Replace values in a dataframe column with values from another dataframe - python

I am trying to replace values in a dataframe column with values from another dataframe, matched on a third column, while keeping the rest of the values from the first df.
# df1
country  name  value
romania  john    100
russia   emma    200
sua      mark    300
china    jack    400
# df2
name  value
emma      2
mark      3
Desired result:
# df3
country  name  value
romania  john    100
russia   emma      2
sua      mark      3
china    jack    400
Thank you

One approach could be as follows:
Use Series.map on column name, turning df2 into a Series for mapping by setting its index to name (DataFrame.set_index).
Next, chain Series.fillna to replace the NaN values with the original values from df1.value (i.e. wherever the mapping did not find a match) and assign the result back to df1['value'].
df1['value'] = df1['name'].map(df2.set_index('name')['value']).fillna(df1['value'])
print(df1)
   country  name  value
0  romania  john  100.0
1   russia  emma    2.0
2      sua  mark    3.0
3    china  jack  400.0
N.B. The result will now contain floats. If you prefer integers, chain .astype(int) as well.
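For example, a sketch of the full chain with the integer cast (using the df1/df2 defined in the reproducible data further down):
df1['value'] = (
    df1['name']
        .map(df2.set_index('name')['value'])
        .fillna(df1['value'])   # keep original value where no match
        .astype(int)            # cast back from float to int
)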

Another option could be using pandas.DataFrame.update:
df1.set_index('name', inplace=True)
df1.update(df2.set_index('name'))
df1.reset_index(inplace=True)
   name  country  value
0  john  romania  100.0
1  emma   russia    2.0
2  mark      sua    3.0
3  jack    china  400.0
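Note that update modifies the frame in place, aligning on the index and overwriting only where df2 supplies non-NA values. A sketch of a variant that leaves df1 untouched and restores the original column order and integer dtype (the name out is just illustrative):
out = df1.set_index('name')
out.update(df2.set_index('name'))                    # overwrite matching rows in place
out = out.reset_index()[['country', 'name', 'value']].astype({'value': int})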

Another option:
df3 = df1.merge(df2, on='name', how='left')
df3['value'] = df3.value_y.fillna(df3.value_x)
df3.drop(['value_x', 'value_y'], axis=1, inplace=True)
#    country  name  value
# 0  romania  john  100.0
# 1   russia  emma    2.0
# 2      sua  mark    3.0
# 3    china  jack  400.0
Reproducible data:
import pandas as pd

df1 = pd.DataFrame({'country': ['romania', 'russia', 'sua', 'china'],
                    'name': ['john', 'emma', 'mark', 'jack'],
                    'value': [100, 200, 300, 400]})
df2 = pd.DataFrame({'name': ['emma', 'mark'], 'value': [2, 3]})

Related

Check if multiple columns are filled and if not, blank all their values

I have the following dataframe:
ID  Name   Date        Code  Manager
1   Paulo  %           10    Jonh's Peter
2   Pedro  2001-01-20  20
3   James              30
4   Sofia  2001-01-20  40    -
I need a way to check whether multiple columns (in this case Date, Code and Manager) are all filled with a value. If any of those 3 columns is blank, blank out the values in all 3 columns, returning this data:
ID  Name   Date        Code  Manager
1   Paulo  %           10    Jonh's Peter
2   Pedro
3   James
4   Sofia  2001-01-20  40    -
What is the best solution for this case?
You can use pandas.DataFrame.loc to replace values in certain columns based on a condition.
Considering that your dataframe is named df, you can use this code to get the expected output:
df.loc[df[["Date", "Code", "Manager"]].isna().any(axis=1), ["Date", "Code", "Manager"]] = ''
print(df)
You can use DataFrame.isnull() with .any(axis=1), then use DataFrame.loc to set whatever value you want.
col_chk = ['Date', 'Code', 'Manager']
m = df[col_chk].isnull().any(axis=1)
df.loc[m, col_chk] = ''  # or pd.NA
print(df)
   ID   Name        Date  Code       Manager
0   1  Paulo           %  10.0  Jonh's Peter
1   2  Pedro
2   3  James
3   4  Sofia  2001-01-20  40.0             -
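If you would rather keep Code numeric than coerce the whole column to object with empty strings, a sketch assigning pd.NA instead (assuming pandas >= 1.0):
df.loc[m, col_chk] = pd.NA  # keeps the blanks as typed missing values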

In-place update in pandas: update the value of the cell based on a condition

          DOB   Name
0  1956-10-30   Anna
1  1993-03-21  Jerry
2  2001-09-09  Peter
3  1993-01-15   Anna
4  1999-05-02  James
5  1962-12-17  Jerry
6  1972-05-04   Kate
In a dataframe like the one above I have duplicate names, and I want to add the suffix '_0' to a name if its DOB is before 1990 and the name is a duplicate.
I am expecting a result like this
          DOB     Name
0  1956-10-30   Anna_0
1  1993-03-21    Jerry
2  2001-09-09    Peter
3  1993-01-15     Anna
4  1999-05-02    James
5  1962-12-17  Jerry_0
6  1972-05-04     Kate
I am using the following
df['Name'] = df[(df['DOB'] < '01-01-1990') & (df['Name'].isin(['Anna','Jerry']))].Name.apply(lambda x: x+'_0')
But I am getting this result
          DOB     Name
0  1956-10-30   Anna_0
1  1993-03-21      NaN
2  2001-09-09      NaN
3  1993-01-15      NaN
4  1999-05-02      NaN
5  1962-12-17  Jerry_0
6  1972-05-04      NaN
How can I add a suffix to a Name that is duplicated and belongs to someone born before 1990?
The problem with your df['Name'] = df[(df['DOB'] < '01-01-1990') & (df['Name'].isin(['Anna','Jerry']))].Name.apply(lambda x: x+'_0') is that df[(df['DOB'] < '01-01-1990') & (df['Name'].isin(['Anna','Jerry']))] is a filtered dataframe with fewer rows than the original. When you assign it back, the rows that were filtered out have no corresponding value in the filtered dataframe, so they become NaN.
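The alignment effect is easy to see in isolation; a minimal sketch with hypothetical data:
import pandas as pd

tmp = pd.DataFrame({'a': [1, 2, 3]})
# assigning a filtered result aligns on the index; row 0 has no
# match in the filtered frame, so it becomes NaN
tmp['a'] = tmp[tmp['a'] > 1]['a'] * 10
print(tmp)
#       a
# 0   NaN
# 1  20.0
# 2  30.0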
You can try mask instead
m = (df['DOB'] < '1990-01-01') & df['Name'].duplicated(keep=False)
df['Name'] = df['Name'].mask(m, df['Name']+'_0')
You can use masks and boolean indexing:
# is the year before 1990?
m1 = pd.to_datetime(df['DOB']).dt.year.lt(1990)
# is the name duplicated?
m2 = df['Name'].duplicated(keep=False)
# if both conditions are True, add '_0' to the name
df.loc[m1&m2, 'Name'] += '_0'
output:
          DOB     Name
0  1956-10-30   Anna_0
1  1993-03-21    Jerry
2  2001-09-09    Peter
3  1993-01-15     Anna
4  1999-05-02    James
5  1962-12-17  Jerry_0
6  1972-05-04     Kate

Counting the number of values in a column using groupby on a specific condition in pandas

I have a dataframe which looks something like this:
dfA
name   field  country  action
Sam    elec   USA      POS
Sam    elec   USA      POS
Sam    elec   USA      NEG
Tommy  mech   Canada   NEG
Tommy  mech   Canada   NEG
Brian  IT     Spain    NEG
Brian  IT     Spain    NEG
Brian  IT     Spain    POS
I want to group the dataframe based on the first 3 columns, adding a new column "No_of_data". This is something I do using this:
dfB = dfA.groupby(["name", "field", "country"], dropna=False).size().reset_index(name = "No_of_data")
This gives me a new dataframe which looks something like this:
dfB
name   field  country  No_of_data
Sam    elec   USA      3
Tommy  mech   Canada   2
Brian  IT     Spain    3
But now I also want to add a new column to this dataframe that tells me the count of "POS" values for every combination of "name", "field" and "country". It should look something like this:
dfB
name   field  country  No_of_data  No_of_POS
Sam    elec   USA      3           2
Tommy  mech   Canada   2           0
Brian  IT     Spain    3           1
How do I add the new column (No_of_POS) to the table dfB when it does not contain the "POS"/"NEG" information, which has to be taken from dfA?
You can use named aggregation in the agg method (passing a dict of {name: function} to rename the output was removed in pandas 1.0):
dfA.groupby(["name", "field", "country"], as_index=False)['action'] \
   .agg(No_of_data='size', No_of_POS=lambda x: x.eq('POS').sum())
You can precompute the boolean column before aggregating; performance should be better as the data size increases:
(dfA.assign(action=dfA.action.eq('POS'))
    .groupby(['name', 'field', 'country'],
             sort=False,
             as_index=False)
    .agg(no_of_data=('action', 'size'),
         no_of_pos=('action', 'sum')))
    name  field country  no_of_data  no_of_pos
0    Sam   elec     USA           3          2
1  Tommy   mech  Canada           2          0
2  Brian     IT   Spain           3          1
You can add an aggregation function when you're grouping your data. Check the agg() function; maybe this will help.
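For instance, a minimal sketch with named aggregation, assuming the dfA from the question:
dfB = dfA.groupby(['name', 'field', 'country'], as_index=False).agg(
    No_of_data=('action', 'size'),                       # rows per group
    No_of_POS=('action', lambda s: s.eq('POS').sum()),   # 'POS' rows per group
)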

Listing unique value counts per groups in pandas dataframe

I am new to pandas and python.
I am trying to group items by one column and list the information from the data frame per group.
My dataframe:
   B       C       D     E           F
1  Honda   USA     2000  Washington  New
2  Honda   USA     2001  Salt Lake   Used
3  Ford    Canada  2005  Washington  New
4  Toyota  USA     2010  Ney York    Used
5  Honda   USA     2001  Salt Lake   Used
6  Honda   Canada  2011  Salt Lake   Crashed
7  Ford    Italy   2014  Rome        New
I am trying to group my dataframe by column B and list how many of each C, D, E, F column value there are per group. For example, column B contains 4 Hondas, which I group together; then I want to list the following information - USA(3), Canada(1), 2000(1), 2001(2), 2011(1), Washington(1), Salt Lake(3), New(1), Used(2), Crashed(1) - and do the same for every group (car make) in column B:
   Car       Country    Year     City           Condition
1  Honda(4)  USA(3)     2000(1)  Washington(1)  New(1)
             Canada(1)  2001(2)  Salt Lake(3)   Used(2)
                        2011(1)                 Crashed(1)
2  Ford(2)   Canada(1)  2005(1)  Washington(1)  New(2)
             Italy(1)   2014(1)  Rome(1)
...
What I've tried so far:
df.groupby(['B'])
Which gives me back <pandas.core.groupby.generic.DataFrameGroupBy object at 0x11d559080>
At this point, I am not sure how to proceed to get the desired result after grouping by column B.
Thank you for your suggestions.
You can use GroupBy.apply with a custom function that processes each column separately with Series.value_counts and then joins the index values to the count values of the Series:
def f(x):
    x = x.value_counts()
    y = x.index.astype(str) + '(' + x.astype(str) + ')'
    return y.reset_index(drop=True)

df1 = df.groupby(['B']).apply(lambda x: x.apply(f)).reset_index(drop=True)
print(df1)
           B          C        D              E           F
0    Ford(2)   Italy(1)  2014(1)  Washington(1)      New(2)
1        NaN  Canada(1)  2005(1)        Rome(1)         NaN
2   Honda(4)     USA(3)  2001(2)   Salt Lake(3)     Used(2)
3        NaN  Canada(1)  2011(1)  Washington(1)  Crashed(1)
4        NaN        NaN  2000(1)            NaN      New(1)
5  Toyota(1)     USA(1)  2010(1)    Ney York(1)     Used(1)

How to add the index of another dataframe

I have two different dataframes. First I had to check that the data in my df1 matches my df2; if that is the case, a column "isRep" is added and set to true, otherwise false. That created a df3 for me.
Now, I need to add an "idRep" column to my df3 that corresponds to the index (generated automatically by pandas) where the data is found in df2.
This is the df1:
Index  Firstname  Name  Origine
0      Johnny     Depp  USA
1      Brad       Pitt  USA
2      Angelina   Pitt  USA
This is the df2:
Index  Firstname  Name      Origine
0      Kidman     Nicole    AUS
1      Jean       Dujardin  FR
2      Brad       Pitt      USA
After the merge with this code:
df = pd.merge(data, dataRep, on=['Firstname', 'Name', 'Origine'], how='left', indicator='IsRep')
df['IsRep'] = np.where(df.IsRep == 'both', True, False)
After this code, I got this result, which is my df3 (it is the same as df1 but with the column "isRep"):
Index  Firstname  Name  Origine  isRep
0      Johnny     Depp  USA      False
1      Brad       Pitt  USA      True
2      Angelina   Pitt  USA      False
Now, I need another dataframe with a column named "idRep" where I put the corresponding index from df2, like below. But I don't know how to do that:
Index  Firstname  Name  Origine  isRep  IdRep
0      Johnny     Depp  USA      False  -
1      Brad       Pitt  USA      True   2
2      Angelina   Pitt  USA      False  -
A bit of a hack would be to reset_index before you merge. Only reset the index on the right DataFrame.
m = dataRep.rename_axis('IdRep').reset_index()
df = pd.merge(data, m, on=['Firstname', 'Name', 'Origine'], how='left', indicator='IsRep')
df['IsRep'] = np.where(df.IsRep == 'both', True, False)
  Firstname  Name Origine  IdRep  IsRep
0    Johnny  Depp     USA    NaN  False
1      Brad  Pitt     USA    2.0   True
2  Angelina  Pitt     USA    NaN  False
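If you want the '-' placeholder from the desired output rather than NaN, you could fill afterwards, e.g. as a sketch:
df['IdRep'] = df['IdRep'].astype(object).fillna('-')  # '-' where there was no match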
Reverse look-up using a dict:
cols = ['Firstname', 'Name', 'Origine']
# map each (Firstname, Name, Origine) tuple from df2 to its index label
d = dict(zip(zip(*map(df2.get, cols)), df2.index))
# the same tuples, taken from df1
z = [*zip(*map(df1.get, cols))]

df1.assign(
    isRep=[*map(d.__contains__, z)],
    IdRep=[*map(d.get, z)]
)
      Firstname  Name Origine  isRep  IdRep
Index
0        Johnny  Depp     USA  False    NaN
1          Brad  Pitt     USA   True    2.0
2      Angelina  Pitt     USA  False    NaN
Variation where we take advantage of assign arguments being order dependent
cols = ['Firstname', 'Name', 'Origine']
d = dict(zip(zip(*map(df2.get, cols)), df2.index))
z = [*zip(*map(df1.get, cols))]

df1.assign(
    IdRep=[*map(d.get, z)],
    isRep=lambda d: d.IdRep.notna()
)
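This should produce the same frame as the first variation, just with the two added columns swapped:
      Firstname  Name Origine  IdRep  isRep
Index
0        Johnny  Depp     USA    NaN  False
1          Brad  Pitt     USA    2.0   True
2      Angelina  Pitt     USA    NaN  False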
