Pandas: Search and match based on two conditions - python

I am using the code below to search a .csv file, match a column that exists in both files, and grab a different column that I want and add it as a new column. However, I am trying to make the match based on two columns instead of one. Is there a way to do this?
import pandas as pd

df1 = pd.read_csv("matchone.csv")
df2 = pd.read_csv("comingfrom.csv")

def lookup_prod(ip):
    for row in df2.itertuples():
        if ip in row[1]:
            return row[3]
    else:
        return '0'

df1['want'] = df1['name'].apply(lookup_prod)
df1[df1.want != '0']
print(df1)
#df1.to_csv('file_name.csv')
The code above matches on a column with the same name in both files and gets the column I request ([3]) from df2. I want it to match on both the column 'name' and another column, 'price', and only take the value from ([3]) when both columns match between df1 and df2.
df 1 :
name price value
a 10 35
b 10 21
c 10 33
d 10 20
e 10 88
df 2 :
name price want
a 10 123
b 5 222
c 10 944
d 10 104
e 5 213
When the code is run (asking for the want column from df2, matching only on df1 name = df2 name), the produced result is:
name price value want
a 10 35 123
b 10 21 222
c 10 33 944
d 10 20 104
e 10 88 213
However, what I want is: only if both df1 name = df2 name and df1 price = df2 price, take the value from the df2 want column, so the desired result is:
name price value want
a 10 35 123
b 10 21 0
c 10 33 944
d 10 20 104
e 10 88 0

You need to use the pandas.DataFrame.merge() method with multiple keys:
df1.merge(df2, on=['name','price'], how='left').fillna(0)
The merge represents missing values as NaN, so the column's dtype changes to float64, but you can change it back after filling the missing values with 0.
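For example, a minimal sketch of restoring the integer dtype (assuming the merged column is want, as in the sample data above):
merged = df1.merge(df2, on=['name', 'price'], how='left')
merged['want'] = merged['want'].fillna(0).astype(int)  # NaN forces float64; fill the gaps, then cast back to int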
Also please be aware that duplicated combinations of name and price in df2 will appear several times in the result.

If you are matching the two dataframes based on the name and the price, you can use df.where and df.isin
df1['want'] = df2['want'].where(df1[['name','price']].isin(df2).all(axis=1)).fillna('0')
df1
name price value want
0 a 10 35 123.0
1 b 10 21 0
2 c 10 33 944.0
3 d 10 20 104.0
4 e 10 88 0

Expanding on https://stackoverflow.com/a/73830294/20110802:
You can add the validate option to the merge in order to avoid duplication on one side (or both):
pd.merge(df1, df2, on=['name','price'], how='left', validate='1:1').fillna(0)
Also, if the float conversion is a problem for you, one option is to do an inner join first and then pd.concat the result with the "leftover" rows of df1, to which you have already added a constant-valued column. It would look something like:
df_inner = pd.merge(df1, df2, on=['name', 'price'], how='inner', validate='1:1')
merged_pairs = set(zip(df_inner.name, df_inner.price))
df_anti = df1.loc[~pd.Series(zip(df1.name, df1.price)).isin(merged_pairs)].copy()  # .copy() avoids SettingWithCopyWarning below
df_anti['want'] = 0
df_result = pd.concat([df_inner, df_anti])  # perhaps ignore_index=True ?
It looks complicated, but it should be quite performant because it filters by set. It might also be possible to set name and price as the index, merge on the index, and then filter by index to avoid the zip-set shenanigans, but I'm no expert on MultiIndex handling.
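For what it's worth, a rough sketch of that index-based filtering (the helper names idx1/idx2 are mine, not from the original answer):
idx1 = df1.set_index(['name', 'price']).index
idx2 = df2.set_index(['name', 'price']).index
df_inner = pd.merge(df1, df2, on=['name', 'price'], how='inner', validate='1:1')
df_anti = df1.loc[~idx1.isin(idx2)].copy()  # rows of df1 with no (name, price) counterpart in df2
df_anti['want'] = 0
df_result = pd.concat([df_inner, df_anti], ignore_index=True)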

Try this code; it will give you the expected results:
import pandas as pd

df1 = pd.DataFrame({'name': ['a', 'b', 'c', 'd', 'e'],
                    'price': [10, 10, 10, 10, 10],
                    'value': [35, 21, 33, 20, 88]})
df2 = pd.DataFrame({'name': ['a', 'b', 'c', 'd', 'e'],
                    'price': [10, 5, 10, 10, 5],
                    'want': [123, 222, 944, 104, 213]})

new = pd.merge(df1, df2, how='left', left_on=['name', 'price'], right_on=['name', 'price'])
print(new.fillna(0))
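With the sample frames above, the merged output should look something like this (want becomes float because of the NaN fill):
  name  price  value   want
0    a     10     35  123.0
1    b     10     21    0.0
2    c     10     33  944.0
3    d     10     20  104.0
4    e     10     88    0.0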

Related

add rows even if columns mismatch

I have an Excel sheet, loaded into a DataFrame, whose tail() looks like this:
ix date Type Value1 Value2 Value3
-------------------------------------------
651 01.02.2021 A 105 135 81
652 01.02.2021 B 3 10 1
653 01.02.2021 C 108 145 82
I have another DataFrame that instead looks like this:
0 02.02.2021 02.02.2021 02.02.2021
1 A B C
Value1 110 4 114
Val2 142 15 157
Value3 96 2 98
I want to append this latter dataframe, transposed, to the end of the first.
I have tried both append() and pd.concat but since column names do not always match (Value2 != Val2), values in the resulting columns end up being NaN.
If the first dataframe is df1 and the second is df2:
First transpose df2 and reset the index:
df3 = df2.T.reset_index()
If the dataframe df2 is always of the same form, you can simply overwrite the column names:
df3.columns = df1.columns
And then concat:
df = pd.concat([df1,df3],axis=0)
If the order of df2 is not always the same and the misspellings can vary, you'll have to identify all possible misspellings first and, for instance, keep them in a dictionary like so:
mapping = {"Value1":"Value1","Value2":"Value2","Value3":"Value3","Val2":"Value2"}
Then assuming the value strings are in the index of df2, you overwrite the index:
df2.index = df2.index.map(mapping)
Afterwards you can perform the steps described above.
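Putting it together, a minimal sketch of the first approach (assuming df2 always has the same shape and row order, so the headers can simply be overwritten):
df3 = df2.T.reset_index()      # transpose so the dates/types become rows again
df3.columns = df1.columns      # overwrite the (possibly misspelled) headers with df1's names
df = pd.concat([df1, df3], axis=0, ignore_index=True)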

How do you remove values from a data frame based on whether they are present in another data frame?

I have 2 data frames. The 1st contains a list of values I am looking to work with, and the second contains these values plus a large number of other values. I am looking for the best way to remove the values that do not appear in the 1st data frame from the 2nd data frame, to reduce the number of entries I am working with.
Example
Input
DF1
Alpha  code
A      1
D      2
E      3
F      4
DF2
Alpha  code
A      23
B      12
C      1
D      32
E      23
F      45
G      51
H      26
Desired Output:
DF1
Alpha  code
A      1
D      2
E      3
F      4
DF2
Alpha  code
A      23
D      32
E      23
F      45
Assuming that your first column in DF1 is called "Alpha", you can do this:
my_list_DF1 = DF1['Alpha'].unique().tolist() # gets all unique values of first column from DF1 into a list
Then, you can filter your DF2, to include only those values, using isin:
new_DF2 = DF2[DF2['Alpha'].isin(my_list_DF1)]
This will result in a smaller DF2, including only the rows whose values in the 'Alpha' column also appear in DF1.
You could also do an inner join, dropping all rows that don't have an entry in DF1 and merging all others:
pd.merge(DF1, DF2, on='Alpha', how='inner')
But then you would subsequently have to drop the columns you don't need, and possibly rename some if they share a name, as sketched below.
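For example, a rough sketch of that cleanup (the explicit suffixes are my own choice, not part of the original answer):
merged = pd.merge(DF1, DF2, on='Alpha', how='inner', suffixes=('_df1', '_df2'))
new_DF2 = merged.drop(columns=['code_df1']).rename(columns={'code_df2': 'code'})  # keep DF2's code values only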

Index match in Pandas?

I am trying to match x value based on their row and column keys. In excel I have used INDEX & MATCH to fetch the correct values, but I am struggling to do the same in Pandas.
Example:
I want to add the highlighted value (saved in df2) to my df['Cost'] column.
I have got df['Weight'] & df['Country'] as keys but I don't know how to use them to look up the highlighted value in df2.
How can I fetch the yellow value into df3['Postage'], which I can then use to add that to my df['Cost'] column?
I hope this makes sense. Let me know if I should provide more info.
Edit - more info (sorry, I could not figure out how to copy the output from Jupyter):
When I run [93] I get the following error:
ValueError: Row labels must have same size as column labels
Thanks!
To get the highlighted value 1.75, simply:
df2.loc[df2['Country']=='B', 3]
So generalizing the above and using country-weight key pairs from df1:
cost = []
for i in range(df1.shape[0]):
    country = df1.loc[i, 'Country']
    weight = df1.loc[i, 'Weight']
    cost.append(df2.loc[df2['Country'] == country, weight].iloc[0])  # .iloc[0] pulls the scalar out of the one-row selection
df1['Cost'] = cost
Or much better:
df1['Cost'] = df1.apply(lambda x: df2.loc[df2['Country'] == x['Country'], x['Weight']].iloc[0], axis=1)
For your case, use (note that [0] is needed to index into the array):
row = df1.iloc[1]
df2[df2.Country == row.Country][row.Weight][0]
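If df2 really has one row per country and one column per weight (as the loc calls above suggest), another option is to reshape df2 to long form and merge; a sketch under that assumption:
rates = df2.melt(id_vars='Country', var_name='Weight', value_name='Postage')  # one row per (Country, Weight) pair
df1 = df1.merge(rates, on=['Country', 'Weight'], how='left')  # the Weight dtypes must match on both sides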
Hope this helps with .iloc and .loc
d = {chr(ord('A')+r):[c+r*10 for c in range(5)] for r in range(5)}
df = pd.DataFrame(d).transpose()
df.columns=['a','b','c','d','e']
print(df)
print("--------")
print(df.loc['B']['c'])
print(df.iloc[1][2])
output
a b c d e
A 0 1 2 3 4
B 10 11 12 13 14
C 20 21 22 23 24
D 30 31 32 33 34
E 40 41 42 43 44
--------
12
12

using pandas, extract data from long format df and add it to wide format df

I have two dataframes, df1 and df2. df1 has repeat observations arranged in wide format, and df2 in long format.
import pandas as pd
df1 = pd.DataFrame({"ID":[1,2,3],"colA_1":[1,2,3],"date1":["1.1.2001", "2.1.2001","3.1.2001"],"colA_2":[4,5,6],"date2":["1.1.2002", "2.1.2002","3.1.2002"]})
df2 = pd.DataFrame({"ID":[1,1,2,2,3,3],"col1":[1,1.5,2,2.5,3,3.5],"date":["1.1.2001", "1.1.2002","2.1.2001","2.1.2002","3.1.2001","3.1.2002"], "col3":[11,12,13,14,15,16],"col4":[21,22,23,24,25,26]})
df1 looks like:
ID colA_1 date1 colA_2 date2
0 1 1 1.1.2001 4 1.1.2002
1 2 2 2.1.2001 5 2.1.2002
2 3 3 3.1.2001 6 3.1.2002
df2 looks like:
ID col1 date col3 col4
0 1 1.0 1.1.2001 11 21
1 1 1.5 1.1.2002 12 22
2 2 2.0 2.1.2001 13 23
3 2 2.5 2.1.2002 14 24
4 3 3.0 3.1.2001 15 25
5 3 3.5 3.1.2002 16 26
6 3 4.0 4.1.2002 17 27
I want to take a given column from df2, "col3", and then:
(1) if the columns "ID" and "date" in df2 match with the columns "ID" and "date1" in df1, I want to put the value in a new column in df1 called "colB_1".
(2) else if the columns "ID" and "date" in df2 match with the columns "ID" and "date2" in df1, I want to put the value in a new column in df1 called "colB_2".
(3) else if the columns "ID" and "date" in df2 have no match with either ("ID" and "date1") or ("ID" and "date2"), I want to ignore these rows.
So, the output of this output dataframe, df3, should look like this:
ID colA_1 date1 colA_2 date2 colB_1 colB_2
0 1 1 1.1.2001 4 1.1.2002 11 12
1 2 2 2.1.2001 5 2.1.2002 13 14
2 3 3 3.1.2001 6 3.1.2002 15 16
What is the best way to do this?
I found this link, but the answer doesn't work for my case. I would like a really explicit way to specify column matching. I think it's possible that df.mask might be able to help me, but I am not sure how to implement it.
e.g.: the following code
df3 = df1.copy()
df3["colB_1"] = ""
df3["colB_2"] = ""
filter1 = (df1["ID"] == df2["ID"]) & (df1["date1"] == df2["date"])
filter2 = (df1["ID"] == df2["ID"]) & (df1["date2"] == df2["date"])
df3["colB_1"] = df.mask(filter1, other=df2["col3"])
df3["colB_2"] = df.mask(filter2, other=df2["col3"])
gives the error
ValueError: Can only compare identically-labeled Series objects
I asked this question previously, and it was closed as a duplicate of this one. However, this is not the case. The answers in the linked question suggest the use of either map or df.merge. map does not work with multiple conditions (in my case, ID and date). And df.merge (the answer given for matching multiple columns) does not work in my case when one of the column names to be merged differs between df1 and df2 ("date" and "date1", for example).
For example, the below code:
df3 = df1.merge(df2[["ID","date","col3"]], on=['ID','date1'], how='left')
fails with a KeyError.
Also noteworthy is that I will be dealing with many different files, with many different column naming schemes, and I will need a different subset each time. This is why I would like an answer that explicitly names the columns and conditions.
Any help with this would be much appreciated.
You can use pd.wide_to_long after removing the underscores; this will unpivot the dataframe, which you can then merge with df2 and pivot back using unstack:
m = df1.rename(columns=lambda x: x.replace('_', ''))
unpiv = pd.wide_to_long(m, ['colA', 'date'], 'ID', 'v').reset_index()
merge_piv = (unpiv.merge(df2[['ID', 'date', 'col3']], on=['ID', 'date'], how='left')
             .set_index(['ID', 'v'])['col3'].unstack().add_prefix('colB_'))
final = df1.merge(merge_piv, left_on='ID', right_index=True)
ID colA_1 date1 colA_2 date2 colB_1 colB_2
0 1 1 1.1.2001 4 1.1.2002 11 12
1 2 2 2.1.2001 5 2.1.2002 13 14
2 3 3 3.1.2001 6 3.1.2002 15 16
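Alternatively, if you want to name the matching columns fully explicitly (and handle the differing names like date vs date1 directly), here is a sketch using two plain left merges, assuming the ID/date combinations are unique in df2:
df3 = df1.copy()
m1 = df1.merge(df2[['ID', 'date', 'col3']], left_on=['ID', 'date1'], right_on=['ID', 'date'], how='left')
df3['colB_1'] = m1['col3'].values   # row order follows df1 because of the left merge
m2 = df1.merge(df2[['ID', 'date', 'col3']], left_on=['ID', 'date2'], right_on=['ID', 'date'], how='left')
df3['colB_2'] = m2['col3'].values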

NaNs after merging two dataframes

I have two dataframes like the following:
df1
id name
-------------------------
0 43 c
1 23 t
2 38 j
3 9 s
df2
user id
--------------------------------------------------
0 222087 27,26
1 1343649 6,47,17
2 404134 18,12,23,22,27,43,38,20,35,1
3 1110200 9,23,2,20,26,47,37
I want to split all the ids in df2 into multiple rows and join the resultant dataframe to df1 on "id".
I do the following:
b = pd.DataFrame(df2['id'].str.split(',').tolist(), index=df2.user_id).stack()
b = b.reset_index()[[0, 'user_id']] # var1 variable is currently labeled 0
b.columns = ['id', 'user_id']
When I try to merge, I get NaNs in the resultant dataframe.
pd.merge(b, df1, on = "id", how="left")
id user name
-------------------------------------
0 27 222087 NaN
1 26 222087 NaN
2 6 1343649 NaN
3 47 1343649 NaN
4 17 1343649 NaN
So, I tried doing the following:
import numpy as np

b['name'] = np.nan
for i in range(0, len(df1)):
    b['name'][(b['id'] == df1['id'][i])] = df1['name'][i]
It still gives the same result as above. I am confused as to what could cause this because I am sure both of them should work!
Any help would be much appreciated!
I read similar posts on SO but none seemed to have a concrete answer. I am also not sure whether this is even a coding issue.
Thanks in advance!
The problem is that you need to convert the id column in df2 to int, because the output of string functions is always string, even when the values are numeric.
df2.id = df2.id.astype(int)
Another solution is to convert df1.id to string:
df1.id = df1.id.astype(str)
You get NaNs because there is no match: str values don't match int values.
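A minimal sketch of that first fix in context (assuming the column produced by the split-and-stack step is the id column used in the merge above):
b['id'] = b['id'].astype(int)  # str.split() produced strings, so cast back to int before merging
result = pd.merge(b, df1, on='id', how='left')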
