Pandas remove duplicates by condition (not with regular drop_duplicate) [duplicate] - python

I have a pandas data frame which looks like this.
Column1 Column2 Column3
0 cat 1 C
1 dog 1 A
2 cat 1 B
I want to identify that the two cat rows (rows 0 and 2) are repeated values, remove one record, and preserve only the first record. The resulting data frame should be:
Column1 Column2 Column3
0 cat 1 C
1 dog 1 A

Use drop_duplicates with subset set to the list of columns to check for duplicates, and keep='first' to keep the first of the duplicates.
If dataframe is:
df = pd.DataFrame({'Column1': ["'cat'", "'toy'", "'cat'"],
                   'Column2': ["'bat'", "'flower'", "'bat'"],
                   'Column3': ["'xyz'", "'abc'", "'lmn'"]})
print(df)
Result:
Column1 Column2 Column3
0 'cat' 'bat' 'xyz'
1 'toy' 'flower' 'abc'
2 'cat' 'bat' 'lmn'
Then:
result_df = df.drop_duplicates(subset=['Column1', 'Column2'], keep='first')
print(result_df)
Result:
Column1 Column2 Column3
0 'cat' 'bat' 'xyz'
1 'toy' 'flower' 'abc'

import pandas as pd
df = pd.DataFrame({"Column1":["cat", "dog", "cat"],
"Column2":[1,1,1],
"Column3":["C","A","B"]})
df = df.drop_duplicates(subset=['Column1'], keep='first')
print(df)

The drop_duplicates() method of DataFrame accepts a list of column names to use when eliminating duplicate records from your data.
The following tested code does the same:
import pandas as pd
df = pd.DataFrame()
df.insert(loc=0, column='Column1', value=['cat', 'toy', 'cat'])
df.insert(loc=1, column='Column2', value=['bat', 'flower', 'bat'])
df.insert(loc=2, column='Column3', value=['xyz', 'abc', 'lmn'])
df = df.drop_duplicates(subset=['Column1', 'Column2'], keep='first')
print(df)
You can pass other column names in the subset parameter as well; by default all columns of your data are considered. keep accepts the following values (a short demo follows the list):
first : Drop duplicates except for the first occurrence.
last : Drop duplicates except for the last occurrence.
False : Drop all duplicates.
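As a quick demo of the three keep values, here is a minimal sketch on the toy frame from above; the row indices in the comments follow from that data:
import pandas as pd

df = pd.DataFrame({'Column1': ['cat', 'toy', 'cat'],
                   'Column2': ['bat', 'flower', 'bat'],
                   'Column3': ['xyz', 'abc', 'lmn']})

# keep='first': rows 0 and 1 survive (row 2 is the later duplicate)
print(df.drop_duplicates(subset=['Column1', 'Column2'], keep='first'))

# keep='last': rows 1 and 2 survive (row 0 is the earlier duplicate)
print(df.drop_duplicates(subset=['Column1', 'Column2'], keep='last'))

# keep=False: only row 1 survives; both members of the duplicate pair are dropped
print(df.drop_duplicates(subset=['Column1', 'Column2'], keep=False))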

Use drop_duplicates() with a column name
import pandas as pd
data = pd.read_excel('your_excel_path_goes_here.xlsx')
#print(data)
data = data.drop_duplicates(subset=["Column1"], keep="first")
keep="first" instructs pandas to keep the first occurrence and remove the later duplicate rows.
keep="last" instructs pandas to keep the last occurrence and remove the earlier duplicate rows.
Suppose we want to remove all duplicate values in the excel sheet. We can specify keep=False.
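For completeness, a minimal sketch of that keep=False case; note that drop_duplicates returns a new DataFrame, so the result must be captured (or inplace=True passed):
# drop every row whose Column1 value occurs more than once
data = data.drop_duplicates(subset=["Column1"], keep=False)
# equivalent, mutating in place
# data.drop_duplicates(subset=["Column1"], keep=False, inplace=True)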


How to remove duplicates within a column in a csv using pandas? python [duplicate]

The pandas drop_duplicates function is great for "uniquifying" a dataframe. I would like to drop all rows which are duplicates across a subset of columns. Is this possible?
A B C
0 foo 0 A
1 foo 1 A
2 foo 1 B
3 bar 1 A
As an example, I would like to drop rows which match on columns A and C so this should drop rows 0 and 1.
This is much easier in pandas now with drop_duplicates and the keep parameter.
import pandas as pd
df = pd.DataFrame({"A":["foo", "foo", "foo", "bar"], "B":[0,1,1,1], "C":["A","A","B","A"]})
df.drop_duplicates(subset=['A', 'C'], keep=False)
Just want to add to Ben's answer on drop_duplicates:
keep : {‘first’, ‘last’, False}, default ‘first’
first : Drop duplicates except for the first occurrence.
last : Drop duplicates except for the last occurrence.
False : Drop all duplicates.
So setting keep to False will give you the desired answer.
DataFrame.drop_duplicates(*args, **kwargs)
    Return DataFrame with duplicate rows removed, optionally only considering certain columns.
    Parameters:
        subset : column label or sequence of labels, optional
            Only consider certain columns for identifying duplicates; by default use all of the columns.
        keep : {'first', 'last', False}, default 'first'
            first : Drop duplicates except for the first occurrence.
            last : Drop duplicates except for the last occurrence.
            False : Drop all duplicates.
        take_last : deprecated
        inplace : boolean, default False
            Whether to drop duplicates in place or to return a copy.
        cols : kwargs-only argument of subset [deprecated]
    Returns:
        deduplicated : DataFrame
If you want the result to be stored in another dataset:
df2 = df.drop_duplicates(keep=False)
or, equivalently:
df2 = df.drop_duplicates(keep=False, inplace=False)
If the same dataset needs to be updated:
df.drop_duplicates(keep=False, inplace=True)
Note that keep=False removes every member of each duplicate group, keeping none of them; to keep one representative per group, like DISTINCT * in SQL, use keep='first' (the default).
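A minimal sketch of the difference, reusing the example frame from this question:
import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar"],
                   "B": [0, 1, 1, 1],
                   "C": ["A", "A", "B", "A"]})

# keep='first' behaves like SQL DISTINCT on (A, C): rows 0, 2, 3 remain
print(df.drop_duplicates(subset=["A", "C"], keep="first"))

# keep=False drops the whole duplicate group: only rows 2 and 3 remain
print(df.drop_duplicates(subset=["A", "C"], keep=False))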
Use groupby and filter:
import pandas as pd
df = pd.DataFrame({"A":["foo", "foo", "foo", "bar"], "B":[0,1,1,1], "C":["A","A","B","A"]})
df.groupby(["A", "C"]).filter(lambda df:df.shape[0] == 1)
Try these variations:
df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar", "foo"], "B": [0, 1, 1, 1, 1], "C": ["A", "A", "B", "A", "A"]})
>>> df.drop_duplicates("A", keep='first')
or
>>> df.drop_duplicates(keep='first')
or
>>> df.drop_duplicates(keep='last')
Actually, dropping rows 0 and 1 only requires the following (any observation containing matched A and C is kept):
In [335]:
df['AC'] = df.A + df.C
In [336]:
print(df.drop_duplicates('C', keep='last'))  # take_last=True is deprecated; this dataset is a special case, in general one may need to drop_duplicates first by 'C' and then by 'A'
A B C AC
2 foo 1 B fooB
3 bar 1 A barA
[2 rows x 4 columns]
But I suspect what you really want is this (one observation containing matched A and C is kept):
In [337]:
print(df.drop_duplicates('AC'))
A B C AC
0 foo 0 A fooA
2 foo 1 B fooB
3 bar 1 A barA
[3 rows x 4 columns]
Edit:
Now it is much clearer, therefore:
In [352]:
DG = df.groupby(['A', 'C'])
print(pd.concat([DG.get_group(item) for item, value in DG.groups.items() if len(value) == 1]))
A B C
2 foo 1 B
3 bar 1 A
[2 rows x 3 columns]
You can use duplicated() to flag all duplicates and filter out flagged rows. If you need to assign columns to new_df later, make sure to call .copy() so that you don't get SettingWithCopyWarning later on.
new_df = df[~df.duplicated(subset=['A', 'C'], keep=False)].copy()
One nice feature of this method is that you can conditionally drop duplicates with it. For example, to drop all duplicated rows only if column A is equal to 'foo', you can use the following code.
new_df = df[~( df.duplicated(subset=['A', 'B', 'C'], keep=False) & df['A'].eq('foo') )].copy()
Also, if you don't wish to write out columns by name, you can pass slices of df.columns to subset=. The same works for drop_duplicates().
# to consider all columns for identifying duplicates
df[~df.duplicated(subset=df.columns, keep=False)].copy()
# the same is true for drop_duplicates
df.drop_duplicates(subset=df.columns, keep=False)
# to consider columns in positions 0 and 2 (i.e. 'A' and 'C') for identifying duplicates
df.drop_duplicates(subset=df.columns[[0, 2]], keep=False)
If you want to check 2 columns with try and except statements, this one can help out.
if "column_2" in df.columns:
    try:
        df[['column_1', 'column_2']] = df[['header', 'column_2']].drop_duplicates(subset=['column_2', 'column_1'], keep='first')
    except:
        df[['column_2']] = df[['column_2']].drop_duplicates(subset='column_2', keep='first')
        print(f"No column_1 for {path}.")
try:
    df[['column_1']] = df[['column_1']].drop_duplicates(subset='column_1', keep='first')
except:
    print(f"No column_1 or column_2 for {path}.")

Assign counts from .count() to a dataframe + column names - pandas python

Hoping someone can help me here - I believe I am close to the solution.
I have a dataframe on which I am using .count() in order to return a series of all column names of my dataframe, each with their respective non-NaN value counts.
Example dataframe:
feature_1  feature_2
1          1
2          NaN
3          2
4          NaN
5          3
Example result for .count() here would output a series that looks like:
feature_1 5
feature_2 3
I am now trying to get this data into a dataframe with the column names "Feature" and "Count", so the expected output looks like this:
Feature    Count
feature_1  5
feature_2  3
I am using .to_frame() to push the series to a dataframe in order to add column names. Full code:
df = data.count()
df = df.to_frame()
df.columns = ['Feature', 'Count']
However, I receive this error message - "ValueError: Length mismatch: Expected axis has 1 elements, new values have 2 elements" - as though it is not recognising the column of names (Feature) as a column with values.
How can I get it to recognise both the Feature and Count columns so that I can name them?
Use Series.reset_index instead of Series.to_frame to get a 2-column DataFrame - the first column comes from the index, the second from the values of the Series:
df = data.count().reset_index()
df.columns = ['Feature', 'Count']
print(df)
Feature Count
0 feature_1 5
1 feature_2 3
Another solution uses the name parameter with Series.rename_axis, or DataFrame.set_axis:
df = data.count().rename_axis('Feature').reset_index(name='Count')
# alternative
df = data.count().reset_index().set_axis(['Feature', 'Count'], axis=1)
print(df)
Feature Count
0 feature_1 5
1 feature_2 3
This happens because your new dataframe has only one column (the column names become the Series index, which to_frame() turns into the DataFrame index). In order to assign a 2-element list to df.columns you have to reset the index first:
df = data.count()
df = df.to_frame().reset_index()
df.columns = ['Feature', 'Count']
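A slightly more compact variant of the same idea, relying on to_frame's name parameter:
df = data.count().to_frame(name='Count').rename_axis('Feature').reset_index()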


How to compare column values of pandas groupby object and summarize them in a new column row

I have the following problem: I want to create a column in a dataframe summarizing all values in a row. Then I want to compare the rows of that column to create a single row containing all the values from all columns, so that each value is only present a single time. As an example, I have the following data frame
df1:
Column1 Column2
0 a 1,2,3
1 a 1,4,5
2 b 7,1,5
3 c 8,9
4 b 7,3,5
the desired output would now be:
df1_new:
Column1 Column2
0 a 1,2,3,4,5
1 b 1,3,5,7
2 c 8,9
What I am currently trying is result = df1.groupby('Column1'), but then I don't know how to compare the values in the rows of the grouped objects, write them to the new column, and remove the duplicates. I read through the pandas documentation of Group By: split-apply-combine but could not figure out a way to do it. I also wonder whether, once I have my desired output, there is a way to check in how many of the lines in the grouped object each value in Column2 of df1_new appeared. Any help on this would be greatly appreciated!
One way to do this is to apply a function on the grouped DataFrame.
This function first converts the series (for each group) to a list, then splits each string in the list on ',', chains the complete list into a single flat list using itertools.chain.from_iterable, converts that to a set so that only unique values are left, sorts it, and finally joins back into a string using str.join. Example -
from itertools import chain

def applyfunc(x):
    ch = chain.from_iterable(y.split(',') for y in x.tolist())
    return ','.join(sorted(set(ch)))

df1_new = df1.groupby('Column1')['Column2'].apply(applyfunc).reset_index()
Demo -
In [46]: df
Out[46]:
Column1 Column2
0 a 1,2,3
1 a 1,4,5
2 b 7,1,5
3 c 8,9
4 b 7,3,5
In [47]: from itertools import chain
In [48]: def applyfunc(x):
   ....:     ch = chain.from_iterable(y.split(',') for y in x.tolist())
   ....:     return ','.join(sorted(set(ch)))
   ....:
In [49]: df.groupby('Column1')['Column2'].apply(applyfunc).reset_index()
Out[49]:
Column1 Column2
0 a 1,2,3,4,5
1 b 1,3,5,7
2 c 8,9
What about this:
df1
Column1 Column2
0 a 1,2,3
1 a 1,4,5
2 b 7,1,5
3 c 8,9
4 b 7,3,5
import numpy as np

df1.groupby('Column1').\
    agg(lambda x: ','.join(x).split(','))['Column2'].\
    apply(lambda x: ','.join(np.unique(x))).reset_index()
Column1 Column2
0 a 1,2,3,4,5
1 b 1,3,5,7
2 c 8,9
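On recent pandas versions (0.25+), a similar result can be sketched with Series.str.split plus DataFrame.explode, avoiding the manual string chaining; df1 and the column names are those from the question:
df1_new = (df1.assign(Column2=df1['Column2'].str.split(','))
              .explode('Column2')                       # one (Column1, value) pair per row
              .drop_duplicates()                        # keep each pair once
              .groupby('Column1')['Column2']
              .agg(lambda s: ','.join(sorted(s)))       # sorted, comma-joined unique values
              .reset_index())
The exploded frame, before the drop_duplicates step, also answers the follow-up question: calling .groupby('Column1')['Column2'].value_counts() on it shows in how many rows each value appears per group.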
