I am trying to merge two dataframes: one with the columns customerId, full name, and emails, and the other with the columns customerId, amount, and date. I want the first dataframe to be the main one, with the other dataframe's information included only where the customerIds match up. I tried doing:
merge = pd.merge(df, df2, on='customerId', how='left')
but the dataframe that is produced contains a lot of repeats and looks wrong:
customerId full name emails amount date
0 002963338 Star shine star.shine@cdw.com $2,910.94 2016-06-14
1 002963338 Star shine star.shine@cdw.com $9,067.70 2016-05-27
2 002963338 Star shine star.shine@cdw.com $6,507.24 2016-04-12
3 002963338 Star shine star.shine@cdw.com $1,457.99 2016-02-24
4 986423367 palm tree tree.palm@snapchat.com,tree@.com $4,604.83 2016-07-16
This can't be right, please help!
The problem is that you have duplicates in the customerId column of df2. So the solution is to remove them, e.g. by drop_duplicates:
df2 = df2.drop_duplicates('customerId')
Sample:
df = pd.DataFrame({'customerId':[1,2,1,1,2], 'full name':list('abcde')})
print (df)
customerId full name
0 1 a
1 2 b
2 1 c
3 1 d
4 2 e
df2 = pd.DataFrame({'customerId':[1,2,1,2,1,1], 'full name':list('ABCDEF')})
print (df2)
customerId full name
0 1 A
1 2 B
2 1 C
3 2 D
4 1 E
5 1 F
merge = pd.merge(df, df2, on='customerId', how='left')
print (merge)
customerId full name_x full name_y
0 1 a A
1 1 a C
2 1 a E
3 1 a F
4 2 b B
5 2 b D
6 1 c A
7 1 c C
8 1 c E
9 1 c F
10 1 d A
11 1 d C
12 1 d E
13 1 d F
14 2 e B
15 2 e D
df2 = df2.drop_duplicates('customerId')
merge = pd.merge(df, df2, on='customerId', how='left')
print (merge)
customerId full name_x full name_y
0 1 a A
1 2 b B
2 1 c A
3 1 d A
4 2 e B
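If the repeated customerId rows in df2 are real transactions, aggregating before the merge may fit better than discarding rows. A minimal sketch using the question's column names, assuming amount has already been converted to a numeric dtype (it is shown as a formatted string):
# sum the amounts per customer, then attach one total row per customerId
totals = df2.groupby('customerId', as_index=False)['amount'].sum()
merged = pd.merge(df, totals, on='customerId', how='left')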
I do not see repeats as whole rows, but there are repetitions in the customerId column. You could remove them using:
df.drop_duplicates('customerId', inplace=True)
where df could be the dataframe corresponding to amount, or the one obtained post-merge. In case you want to keep a few rows per customer (say n), you could use:
df.groupby('customerId').head(n)
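For example, to keep only the two most recent transactions per customer before merging, a sketch reusing the question's frames (sorting by date assumes that column parses as dates):
# newest first, then keep at most two rows per customerId
df2_trimmed = df2.sort_values('date', ascending=False).groupby('customerId').head(2)
merged = pd.merge(df, df2_trimmed, on='customerId', how='left')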
I have two Dataframes.
print(df1)
key value
0 A 2
1 B 3
2 C 2
3 D 3
print(df2)
key value
0 B 3
1 D 1
2 E 1
3 F 3
What I want is an outer merge on key that picks whichever value is not NaN.
Which one it chooses when both are int (or float) is not that important. The mean would be a nice touch, though.
print(df3)
key value
0 A 2
1 B 3
2 C 2
3 D 2
4 E 1
5 F 3
I tried:
df3 = df1.merge(df2, on='key', how='outer')
but it generates two new columns. I could just do my calculations afterwards, but I am sure there is an easier solution that I just could not find.
Thanks for your help.
This works for me. The duplicates are dropped in order of dataframe entry, so the dupes from df1 are dropped and those from df2 are kept; any rows that end up with NaN values can then be dropped with .dropna():
dfs = pd.concat([df1,df2]).drop_duplicates(subset=['key'],keep='last').dropna(how='any')
key value
0 A 2
2 C 2
0 B 3
1 D 1
2 E 1
3 F 3
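If you do want the mean whenever a key appears in both frames (so D becomes 2), a minimal sketch is a groupby after the concat:
# average the values per key across both frames
df3 = pd.concat([df1, df2]).groupby('key', as_index=False)['value'].mean()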
I am using Python to merge two dataframes:
join = pd.merge(df1, df2, on=["A", "B"], how="left")
Table 1:
A B
a 1
b 2
c 3
Table 2:
A B Flag C
a 1 0 20
b 2 1 40
c 3 0 60
a 1 1 80
b 2 0 10
The result that I get after left join is:
A B Flag C
a 1 0 20
a 1 1 80
b 2 1 40
b 2 0 10
c 3 0 60
Here we see the rows for (a, 1) and (b, 2) each come twice because of table 2. I want to keep just one row per pair based on the Flag column: of the two duplicates, the one whose Flag value is == 1.
So Final Expected output is:
A B Flag C
a 1 1 80
b 2 1 40
c 3 0 60
Is there any pythonic way to do it?
# raise preferred lines to the top
df2 = df2.sort_values(by='Flag', ascending=False)
# deduplicate
df2 = df2.drop_duplicates(subset=['A','B'], keep='first')
# merge
pd.merge(df1, df2, on=['A','B'])
A B Flag C
0 a 1 1 80
1 b 2 1 40
2 c 3 0 60
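An equivalent sketch that avoids the explicit sort, assuming Flag is numeric, locates the row with the largest Flag per key via idxmax:
# for each (A, B) pair, keep the row holding the maximum Flag
best = df2.loc[df2.groupby(['A', 'B'])['Flag'].idxmax()]
print(pd.merge(df1, best, on=['A', 'B']))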
The concept is similar to what you would do in SQL: build a separate table with the selection criteria (in this case the maximum Flag per key), keeping enough columns to match each observation in the joined table:
join = pd.merge(df1, df2, how="left")
# as_index=False keeps 'A' as a column so it can serve as a merge key
maximums = join.groupby('A', as_index=False)['Flag'].max()
join = pd.merge(join, maximums, on=['A', 'Flag'])
Try this join, which deduplicates on the keys after merging (note that pd.merge does not accept on together with left_index/right_index, so the index flags were dropped):
join = (pd.merge(df1, df2, on=["A", "B"], how="left")
        .sort_values("Flag", ascending=False)
        .drop_duplicates(["A", "B"]))
print(join)
I have imported the following Excel file and would like to sort it by frequency descending, but with 'Other', 'No data' and 'All' (the total) at the bottom, in that order. Is this possible?
table1 = pd.read_excel("table1.xlsx")
table1
Use:
df = pd.DataFrame({
'generalenq':list('abcdef'),
'percentage':[1,3,5,7,1,0],
'frequency':[5,3,6,9,2,4],
})
df.loc[0, 'generalenq'] = 'All'
df.loc[2, 'generalenq'] = 'No data'
df.loc[3, 'generalenq'] = 'Other'
print (df)
generalenq percentage frequency
0 All 1 5
1 b 3 3
2 No data 5 6
3 Other 7 9
4 e 1 2
5 f 0 4
First, create a dictionary mapping the special labels to integers for ordering. Then create a mask of the matching rows with Series.isin, and sort the non-matched rows, selected by inverting the mask with ~ in boolean indexing:
d = {'Other':0,'No data':1,'All':2}
mask = df['generalenq'].isin(list(d.keys()))
df1 = df[~mask].sort_values('frequency', ascending=False)
print (df1)
generalenq percentage frequency
5 f 0 4
1 b 3 3
4 e 1 2
Then filter the matched rows with the mask and create a helper column for sorting by the mapped dict:
df2 = df[mask].assign(new = lambda x: x['generalenq'].map(d)).sort_values('new').drop(columns='new')
print (df2)
generalenq percentage frequency
3 Other 7 9
2 No data 5 6
0 All 1 5
And finally, join them together with concat:
df = pd.concat([df1, df2], ignore_index=True)
print (df)
generalenq percentage frequency
0 f 0 4
1 b 3 3
2 e 1 2
3 Other 7 9
4 No data 5 6
5 All 1 5
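The same ordering can also be produced in one chain on the original df with temporary sort-key columns (a sketch; new and neg are hypothetical helper column names):
out = (df.assign(new=df['generalenq'].map(d).fillna(-1),
                 neg=-df['frequency'])
         .sort_values(['new', 'neg'])
         .drop(columns=['new', 'neg'])
         .reset_index(drop=True))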
Right now I have two dataframes (data1 and data2)
I would like to print a column of string values from the dataframe called data1, based on whether the ID exists in both data2 and data1.
What I am doing now gives me a boolean Series (True or False if the ID exists in both dataframes), but not the column of strings.
print(data2['id'].isin(data1.id).to_string())
yields
0 True
1 True
2 True
3 True
4 True
5 True
Any ideas would be appreciated.
Here is a sample of data1
'user_id', 'id', 'rating', 'unix_timestamp'
196 242 3 881250949
186 302 3 891717742
22 377 1 878887116
And data2 contains something like this
'id', 'title', 'release_date',
'video_release_date', 'imdb_url'
37|Nadja (1994)|01-Jan-1994||http://us.imdb.com/M/title-exact?Nadja%20(1994)|0|0|0|0|0|0|0|0|1|0|0|0|0|0|0|0|0|0|0
38|Net, The (1995)|01-Jan-1995||http://us.imdb.com/M/title-exact?Net,%20The%20(1995)|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|1|0|0
39|Strange Days (1995)|01-Jan-1995||http://us.imdb.com/M/title-exact?Strange%20Days%20(1995)|0|1|0|0|0|0|1|0|0|0|0|0|0|0|0|1|0|0|0
If all values of id are unique:
I think you need merge with an inner join. From data2, select only the id column; the on parameter should be omitted, because the join is then made on all common columns, here only id:
df = pd.merge(data1, data2[['id']])
Sample:
data1 = pd.DataFrame({'id':list('abcdef'),
'B':[4,5,4,5,5,4],
'C':[7,8,9,4,2,3]})
print (data1)
B C id
0 4 7 a
1 5 8 b
2 4 9 c
3 5 4 d
4 5 2 e
5 4 3 f
data2 = pd.DataFrame({'id':list('frcdeg'),
'D':[1,3,5,7,1,0],
'E':[5,3,6,9,2,4],})
print (data2)
D E id
0 1 5 f
1 3 3 r
2 5 6 c
3 7 9 d
4 1 2 e
5 0 4 g
df = pd.merge(data1, data2[['id']])
print (df)
B C id
0 4 9 c
1 5 4 d
2 5 2 e
3 4 3 f
If id is duplicated in one or the other DataFrame, use boolean filtering instead; here are several similar solutions:
df = data1[data1['id'].isin(set(data1['id']) & set(data2['id']))]
ids = set(data1['id']) & set(data2['id'])
df = data1.query('id in @ids')
df = data1[np.in1d(data1['id'], np.intersect1d(data1['id'], data2['id']))]
Sample:
data1 = pd.DataFrame({'id':list('abcdef'),
'B':[4,5,4,5,5,4],
'C':[7,8,9,4,2,3]})
print (data1)
B C id
0 4 7 a
1 5 8 b
2 4 9 c
3 5 4 d
4 5 2 e
5 4 3 f
data2 = pd.DataFrame({'id':list('fecdef'),
'D':[1,3,5,7,1,0],
'E':[5,3,6,9,2,4],})
print (data2)
D E id
0 1 5 f
1 3 3 e
2 5 6 c
3 7 9 d
4 1 2 e
5 0 4 f
df = data1[data1['id'].isin(set(data1['id']) & set(data2['id']))]
print (df)
B C id
2 4 9 c
3 5 4 d
4 5 2 e
5 4 3 f
EDIT:
You can use:
df = data2.loc[data2['id'].isin(set(data1['id']) & set(data2['id'])), ['title']]
ids = set(data1['id']) & set(data2['id'])
df = data2.query('id in @ids')[['title']]
df = data2.loc[np.in1d(data2['id'], np.intersect1d(data1['id'], data2['id'])), ['title']]
You can compute the set intersection of the two columns -
ids = set(data1['id']).intersection(data2['id'])
Or,
ids = np.intersect1d(data1['id'], data2['id'])
Next, query/filter out relevant rows.
data1.loc[data1['id'].isin(ids), 'id']
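To print the titles themselves from data2, which is what the question ultimately asks for, a minimal sketch:
ids = set(data1['id']).intersection(data2['id'])
print(data2.loc[data2['id'].isin(ids), 'title'].to_string())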
I have a Pandas dataframe that contains a grouping variable. I would like to merge each group with other dataframes based on the contents of one of the columns. So, for example, I have a dataframe, dfA, which can be defined as:
dfA = pd.DataFrame({'a':[1,2,3,4,5,6],
'b':[0,1,0,0,1,1],
'c':['a','b','c','d','e','f']})
a b c
0 1 0 a
1 2 1 b
2 3 0 c
3 4 0 d
4 5 1 e
5 6 1 f
Two other dataframes, dfB and dfC, contain a common column ('a') and an extra column ('d') and can be defined as:
dfB = pd.DataFrame({'a':[1,2,3],
'd':[11,12,13]})
a d
0 1 11
1 2 12
2 3 13
dfC = pd.DataFrame({'a':[4,5,6],
'd':[21,22,23]})
a d
0 4 21
1 5 22
2 6 23
I would like to be able to split dfA based on column 'b' and merge one of the groups with dfB and the other group with dfC to produce an output that looks like:
a b c d
0 1 0 a 11
1 2 1 b 12
2 3 0 c 13
3 4 0 d 21
4 5 1 e 22
5 6 1 f 23
In this simplified version, I could concatenate dfB and dfC and merge with dfA without splitting into groups as shown below:
dfX = pd.concat([dfB,dfC])
dfA = dfA.merge(dfX,on='a',how='left')
print(dfA)
a b c d
0 1 0 a 11
1 2 1 b 12
2 3 0 c 13
3 4 0 d 21
4 5 1 e 22
5 6 1 f 23
However, in the real-world situation the smaller dataframes will be generated from multiple complex sources, and generating them all and combining them into a single dataframe beforehand may not be feasible, because there may be overlapping data in the column used for merging (this is avoided if the dataframe can be split on the grouping variable first). Is it possible to use the Pandas groupby() method to do this instead? I was thinking of something like the following, which doesn't work, perhaps because I'm not combining the groups into a new dataframe correctly:
grouped = dfA.groupby('b')
for name, group in grouped:
    if name == 0:
        group = group.merge(dfB, on='a', how='left')
    elif name == 1:
        group = group.merge(dfC, on='a', how='left')
Any thoughts would be appreciated.
This will fix your code:
l = []
grouped = dfA.groupby('b')
for name, group in grouped:
    if name == 0:
        group = group.merge(dfB, on='a', how='left')
    elif name == 1:
        group = group.merge(dfC, on='a', how='left')
    l.append(group)
pd.concat(l)
Out[215]:
a b c d
0 1 0 a 11.0
1 3 0 c 13.0
2 4 0 d NaN
0 2 1 b NaN
1 5 1 e 22.0
2 6 1 f 23.0
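If there are more groups, a mapping from group key to lookup frame removes the if/elif chain entirely (a sketch; lookups is a hypothetical name):
lookups = {0: dfB, 1: dfC}
# merge each group with its own lookup table, then stitch the pieces back together
result = pd.concat(
    [group.merge(lookups[name], on='a', how='left')
     for name, group in dfA.groupby('b')]
)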