I have two data frames. One contains the main data (called dtt_main) and could be huge; the other (called dtt_selected) contains only two columns, which are also present in the main data frame. For every entry in dtt_selected, I want to check whether the same values appear in dtt_main. If so, those rows should be removed (the values are not unique in dtt_main, so a single entry can remove multiple rows). I managed to write a small function which does exactly this, but it is really slow because I have to iterate over both dataframes simultaneously. I would be very happy about a faster, more pandas-like solution. Thanks!
import pandas as pd

# The real data set contains ~100_000 rows and ~1000 columns
dtt_main = pd.DataFrame({
    'a': [1,1,1,2,2,4,5,4],
    'b': [1,1,2,2,3,3,4,6],
    'data': list('abcdefgh')
})
dtt_selected = pd.DataFrame({
    'a': [1,1,2,4],
    'b': [1,5,3,6]
})
def remove_selected(dtt_main, dtt_selected):
    for row_select in dtt_selected.itertuples():
        for row_main in dtt_main.itertuples():
            # First entry of each tuple is the index!
            if (row_select[1] == row_main[1]) and (row_select[2] == row_main[2]):
                dtt_main.drop(row_main[0], axis='rows', inplace=True)

remove_selected(dtt_main, dtt_selected)
print(dtt_main)
   a  b data
2  1  2    c
3  2  2    d
5  4  3    f
6  5  4    g
You could left join the DataFrames using pd.merge. Setting indicator=True adds a column _merge that contains 'both' for rows that also occur in dtt_selected (and should therefore be dropped) and 'left_only' for rows that occur only in dtt_main (and should be kept). In the next line, you first keep only the 'left_only' rows and then drop the now unnecessary '_merge' column:
df1 = dtt_main.merge(dtt_selected, how='left', indicator=True)
df1[df1['_merge'] == 'left_only'].drop(columns='_merge')
#Output
# a b data
#2 1 2 c
#3 2 2 d
#5 4 3 f
#6 5 4 g
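If you prefer a single chained expression, the same anti-join can be written with query; this is just a stylistic variant of the two lines above:
df1 = (dtt_main.merge(dtt_selected, how='left', indicator=True)
               .query("_merge == 'left_only'")
               .drop(columns='_merge'))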
I use pandas.DataFrame.drop_duplicates to find and remove duplicates in a dataframe, which works great. However, I would like to know which data has been removed.
Is there a way to save the data in a new list before removing it?
Unfortunately, I found no information on this in the pandas documentation.
Thanks for the answer.
Use the duplicated function to identify which rows are duplicated. By default the first occurrence is marked False and every later occurrence True; using this mask as a filter on the original data, you can see which rows are kept and which are dropped.
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.duplicated.html
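A minimal sketch of that idea, assuming a DataFrame df and that duplicates are judged on all columns (pass subset= with column labels to restrict the check):
mask = df.duplicated()   # False for the first occurrence, True for every later one
dropped = df[mask]       # the rows that drop_duplicates() would remove
kept = df[~mask]         # same result as df.drop_duplicates()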
You can use duplicated and boolean indexing with groupby.agg to keep the list of duplicates:
m = df.duplicated('group')
dropped = df[m].groupby(df['group'])['value'].agg(list)
print(dropped)
df = df[~m]
print(df)
Output:
# print(dropped)
group
A [2]
B [4, 5]
C [7]
Name: value, dtype: object
# print(df)
group value
0 A 1
2 B 3
5 C 6
Used input:
group value
0 A 1
1 A 2
2 B 3
3 B 4
4 B 5
5 C 6
6 C 7
I am trying to drop pandas columns in the following way. I have a list of columns to drop; this list will be used many times in my notebook. I also have two columns which are only referenced once.
drop_cols=['var1','var2']
df = df.drop(columns={'var0',drop_cols})
So basically, I want to drop all columns from the list drop_cols plus a hard-coded 'var0' column, all in one swoop. This gives an error; how do I resolve it?
df = df.drop(columns=drop_cols+['var0'])
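Python's + operator concatenates the two lists, so drop_cols + ['var0'] hands drop a single flat list of labels. If the same list is reused on dataframes that may not all contain every column, drop also accepts errors='ignore' to skip missing labels.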
From what I gather, you have a set of columns you wish to drop from several different dataframes, while also adding another unique column to be dropped from a particular data frame. The command you used is close but misses the point in that you can't build the combined collection that way: a set literal like {'var0', drop_cols} fails because a list is unhashable. This is how I would approach the problem.
Given a Dataframe of the form:
V0 V1 V2 V3
0 1 2 3 4
1 5 6 7 8
2 9 10 11 12
Define a function to merge the column names:
def mergeNames(spc_col, multi_cols):
    rslt = [spc_col]
    rslt.extend(multi_cols)
    return rslt
Then with
drop_cols = ['V1', 'V2']
df.drop(columns=mergeNames('V0', drop_cols))
yields:
V3
0 4
1 8
2 12
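Here mergeNames('V0', drop_cols) returns ['V0', 'V1', 'V2'], so df.drop receives a single flat list of labels; a plain concatenation such as ['V0'] + drop_cols achieves the same without a helper.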
I'm sure this question must have already been answered somewhere but I couldn't find an answer that suits my case.
I have 2 pandas DataFrames
a = pd.DataFrame({'A1':[1,2,3], 'A2':[2,4,6]}, index=['a','b','c'])
b = pd.DataFrame({'A1':[3,5,6], 'A2':[3,6,9]}, index=['a','c','d'])
I want to merge them in order to obtain something like
result = pd.DataFrame({
    'A1': [3,2,5,6],
    'A2': [3,4,6,9]
}, index=['a','b','c','d'])
Basically, I want a new df with the union of both indexes. Where indexes match, the value in each column should be updated with the one from the second df (in this case b). Where there is no match the value is taken from the starting df (in this case a).
I tried with merge(), join() and concat() but I could not manage to obtain this result.
If the comments are correct and there's indeed a typo in your result, you could use pd.concat to create one dataframe (b being the first, as it is b whose values take priority over a's), and then drop the duplicated index labels:
Using your sample data:
c = pd.concat([b,a])
c[~c.index.duplicated()].sort_index()
prints:
A1 A2
a 3 3
b 2 4
c 5 6
d 6 9
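For reference, pandas also has a method built for exactly this alignment logic: DataFrame.combine_first fills the calling frame's missing entries from the other frame over the union of both indexes. A one-line sketch on the same data (note that the result may be upcast to float because of the intermediate NaNs):
result = b.combine_first(a)   # b's values win, a fills the gaps
print(result)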
I have a big dataframe with many duplicates in it. I want to keep the first and last entry of each duplicate but drop every duplicate in between.
I've already tried to get this done by using df.drop_duplicates with keep='first' and keep='last' to get two dataframes and then merging them back into one df, so that I have the first and last entries, but that didn't work.
df_first = df
df_last = df
df_first['Path'].drop_duplicates(keep='first', inplace=True)
df_last['Path'].drop_duplicates(keep='last', inplace=True)
Thanks for your help in advance!
Use GroupBy.nth, which avoids duplicates when a group has only one row:
df = pd.DataFrame({
    'a': [5,3,6,9,2,4],
    'Path': list('aaabbc')
})
print(df)
a Path
0 5 a
1 3 a
2 6 a
3 9 b
4 2 b
5 4 c
df = df.groupby('Path').nth([0, -1])
print (df)
a
Path
a 5
a 6
b 9
b 2
c 4
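An alternative sketch on the original df from above (assuming, as in the answer, that duplicates are judged on the 'Path' column): a row is dropped only if it is neither the first nor the last occurrence of its value, which two duplicated masks express directly:
m_first = df.duplicated('Path', keep='first')  # True for every row after the first occurrence
m_last = df.duplicated('Path', keep='last')    # True for every row before the last occurrence
print(df[~(m_first & m_last)])                 # keeps first and last of each group, plus singletons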
Using GroupBy.nth again, here is updated code from the previous solution to keep the second entry of each duplicate (rows whose value occurs only once are kept unchanged):
def keep_second_dup(duplicate, colname):
    # count how often each value of the column occurs
    duplicate['Count'] = duplicate[colname].map(duplicate[colname].value_counts())
    second_duplicate = duplicate[duplicate['Count'] > 1]
    residual = duplicate[duplicate['Count'] == 1]
    # take the second occurrence within each duplicated group
    sec = second_duplicate.groupby(colname).nth([1]).reset_index()
    final_data = pd.concat([sec, residual])
    final_data.drop('Count', axis=1, inplace=True)
    return final_data
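With the reconstruction above, keep_second_dup(df, 'Path') on the sample df from the previous answer keeps the second 'a' row (a=3), the second 'b' row (a=2), and the single 'c' row (a=4). Note that the index handling of GroupBy.nth changed in pandas 2.0 (it now returns rows with their original index), so the reset_index call may need adjusting there.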
I have this Python Pandas DataFrame DF:
DICT = { 'letter': ['A','B','C','A','B','C','A','B','C'],
'number': [1,1,1,2,2,2,3,3,3],
'word' : ['one','two','three','three','two','one','two','one','three']}
DF = pd.DataFrame(DICT)
Which looks like:
letter number word
0 A 1 one
1 B 1 two
2 C 1 three
3 A 2 three
4 B 2 two
5 C 2 one
6 A 3 two
7 B 3 one
8 C 3 three
And I want to extract the lines
letter number word
A 1 one
B 2 two
C 3 three
First I tried:
DF[(DF['letter'].isin(("A","B","C"))) &
DF['number'].isin((1,2,3)) &
DF['word'].isin(('one','two','three'))]
Of course it didn't work: everything was selected.
Then I tested:
Bool = DF[['letter','number','word']].isin(("A",1,"one"))
DF[np.all(Bool,axis=1)]
Good, it works! But only for one line...
If we take the next step and give an iterable to .isin():
Bool = DF[['letter','number','word']].isin((("A",1,"one"),
("B",2,"two"),
("C",3,"three")))
Then it fails: the Boolean array is full of False...
What am I doing wrong? Is there a more elegant way to do this selection based on several columns?
(Anyway, I want to avoid a for loop, because the real DataFrames I'm using are really big, so I'm looking for the fastest way to do the job.)
The idea is to create a new DataFrame with all the triples and then merge it with the original DataFrame:
L = [("A",1,"one"),
("B",2,"two"),
("C",3,"three")]
df1 = pd.DataFrame(L, columns=['letter','number','word'])
print (df1)
letter number word
0 A 1 one
1 B 2 two
2 C 3 three
df = DF.merge(df1)
print (df)
letter number word
0 A 1 one
1 B 2 two
2 C 3 three
Another idea is to build a list of tuples from the columns, convert it to a Series and then compare with isin:
s = pd.Series(list(map(tuple, DF[['letter','number','word']].values.tolist())),index=DF.index)
df1 = DF[s.isin(L)]
print (df1)
letter number word
0 A 1 one
4 B 2 two
8 C 3 three
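A third option, sketched here under the assumption of pandas >= 0.24 (for MultiIndex.from_frame), is to view the three columns as a MultiIndex and test membership of each whole tuple at once:
idx = pd.MultiIndex.from_frame(DF[['letter','number','word']])
print(DF[idx.isin(L)])   # same rows as above, original index preserved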