Given dataset 1
name,x,y
st. peter,1,2
big university portland,3,4
and dataset 2
name,x,y
saint peter,3,4
uni portland,5,6
The goal is to merge on
d1.merge(d2, on="name", how="left")
There are no exact matches on name though. So I'm looking to do a kind of fuzzy matching. The technique does not matter in this case, more how to incorporate it efficiently into pandas.
For example, st. peter might match saint peter in the other, but big university portland might be too much of a deviation that we wouldn't match it with uni portland.
One way to think of it is to allow joining with the lowest Levenshtein distance, but only if it is below 5 edits (st. --> saint is 4).
The resulting dataframe should only contain the row st. peter, and contain both "name" variations, and both x and y variables.
Is there a way to do this kind of merging using pandas?
Did you look at fuzzywuzzy?
You might do something like:
import pandas as pd
import fuzzywuzzy.process as fwp
choices = list(df2.name)
def fmatch(row):
    minscore = 95  # or whatever score works for you
    choice, score = fwp.extractOne(row['name'], choices)  # use row['name'], not row.name (which is the index label)
    return choice if score > minscore else None
df1['df2_name'] = df1.apply(fmatch,axis=1)
merged = pd.merge(df1,
                  df2,
                  left_on='df2_name',
                  right_on='name',
                  suffixes=['_df1', '_df2'],
                  how='outer')  # assuming you want to keep unmatched records
Caveat Emptor: I haven't tried to run this.
Let's say you have that function which returns the best match if any, None otherwise:
def best_match(s, candidates):
    '''Return the item in candidates that best matches s.
    Will return None if a good enough match is not found.
    '''
    # Some code here.
Then you can join on the values it returns, though you can do it in different ways that lead to different output (I think; I did not look closely at this):
(df1.assign(name=df1['name'].apply(lambda x: best_match(x, df2['name'])))
.merge(df2, on='name', how='left'))
(df1.merge(df2.assign(name=df2['name'].apply(lambda x: best_match(x, df1['name']))),
           on='name', how='left'))
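For reference, here is one possible way to write best_match, a minimal sketch using difflib from the standard library (the 0.8 cutoff is just an assumption; a Levenshtein-based version would work equally well):
import difflib

def best_match(s, candidates):
    '''Return the item in candidates that best matches s,
    or None if a good enough match is not found.
    '''
    matches = difflib.get_close_matches(s, list(candidates), n=1, cutoff=0.8)
    return matches[0] if matches else None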
The simplest idea I have right now is to create a special dataframe with the distances between all names:
>>> from Levenshtein import distance
>>> df1['dummy'] = 1
>>> df2['dummy'] = 1
>>> merger = pd.merge(df1, df2, on=['dummy'], suffixes=['1','2'])[['name1','name2', 'x2', 'y2']]
>>> merger
name1 name2 x2 y2
0 st. peter saint peter 3 4
1 st. peter uni portland 5 6
2 big university portland saint peter 3 4
3 big university portland uni portland 5 6
>>> merger['res'] = merger.apply(lambda x: distance(x['name1'], x['name2']), axis=1)
>>> merger
name1 name2 x2 y2 res
0 st. peter saint peter 3 4 4
1 st. peter uni portland 5 6 9
2 big university portland saint peter 3 4 18
3 big university portland uni portland 5 6 11
>>> merger = merger[merger['res'] <= 5]
>>> merger
name1 name2 x2 y2 res
0 st. peter saint peter 3 4 4
>>> del df1['dummy']
>>> del merger['res']
>>> pd.merge(df1, merger, how='left', left_on='name', right_on='name1')
name x y name1 name2 x2 y2
0 st. peter 1 2 st. peter saint peter 3 4
1 big university portland 3 4 NaN NaN NaN NaN
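As a side note, pandas 1.2+ supports how='cross', so the same cross product can be built without the dummy columns; a sketch using the frames above:
>>> merger = pd.merge(df1, df2, how='cross', suffixes=['1', '2'])[['name1', 'name2', 'x2', 'y2']]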
I have a folder with many csv files. They all look similar, they all have the same names for columns and rows. They all have strings as values in their cells. I want to concatenate them along columns AND rows so that all the strings are concatenated into their respective cells.
Example:
file1.csv
0,1,2,3,4
b1,peter,house,ash,plane
b2,carl,horse,paul,knife
b3,mary,apple,linda,carrot
b4,hank,car,herb,beer
file2.csv
0,1,2,3,4
b1,mark,green,hello,band
b2,no,phone,spoon,goodbye
b3,red,cherry,charly,hammer
b4,good,yes,ok,simon
What I want is this result with no delimiter between the string values:
concatenated.csv
0,1,2,3,4
b1,peter mark,house green,ash hello,plane band
b2,carl no,horse phone,paul spoon,knife goodbye
b3,mary red,apple cherry,linda charly,carrot hammer
b4,hank good,car yes,herb ok,beer simon
I don't know how to do this in pandas in a Jupyter notebook.
I have tried a couple of things, but all of them kept either a separate set of rows or a separate set of columns.
If these are your dataframes:
import pandas as pd

df1_data = {
    1 : ['peter', 'carl', 'mary', 'hank'],
    2 : ['house', 'horse', 'apple', 'car']
}
df1 = pd.DataFrame(df1_data)

df2_data = {
    1 : ['mark', 'no', 'red', 'good'],
    2 : ['green', 'phone', 'cherry', 'yes']
}
df2 = pd.DataFrame(df2_data)
df1:
1 2
0 peter house
1 carl horse
2 mary apple
3 hank car
df2:
1 2
0 mark green
1 no phone
2 red cherry
3 good yes
You can reach your requested dataframe like this:
df = pd.DataFrame()
df[1] = df1[1] + ' ' + df2[1]
df[2] = df1[2] + ' ' + df2[2]
print(df)
Output:
1 2
0 peter mark house green
1 carl no horse phone
2 mary red apple cherry
3 hank good car yes
Loop for csv files:
Now, if you have a lot of csv files with names like file1.csv, file2.csv and so on, you can read them all into d like this:
d = {}
for i in range(1, N + 1):   # N is the number of csv files (the range starts at 1, so add 1 to N)
    d[i] = pd.read_csv('.../file' + str(i) + '.csv')
And build the dataframe you want like this:
concatenated_df = pd.DataFrame()
for i in range(1, M):       # M is the number of columns here
    concatenated_df[i] = d[1].iloc[:,i] + ' ' + d[2].iloc[:,i] + ...
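If you want to avoid writing d[1], d[2], ... by hand, here is a sketch that generalizes the concatenation over all files (the names d and N are assumed from the loop above):
concatenated_df = pd.DataFrame()
concatenated_df[0] = d[1].iloc[:, 0]         # keep the row labels (b1, b2, ...)
for col in range(1, len(d[1].columns)):      # every column except the label column
    joined = d[1].iloc[:, col]
    for i in range(2, N + 1):                # append the matching column from every other file
        joined = joined + ' ' + d[i].iloc[:, col]
    concatenated_df[col] = joined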
If performance is not an issue, you can use pandas.DataFrame.applymap with pandas.Series.add:
out = df1[[0]].join(df1.iloc[:, 1:].applymap(lambda v: f"{v} ").add(df2.iloc[:, 1:]))
Or, for a large dataset, you can use pandas.concat with a listcomp:
out = (
    df1[[0]]
    .join(pd.concat([df1.merge(df2, on=0)
                        .filter(regex=rf"{p}_\w")
                        .agg(" ".join, axis=1)
                        .rename(idx)
                     for idx, p in enumerate(range(1, len(df1.columns)), start=1)],
                    axis=1))
)
Output:
print(out)
0 1 2 3 4
0 b1 peter mark house green ash hello plane band
1 b2 carl no horse phone paul spoon knife goodbye
2 b3 mary red apple cherry linda charly carrot hammer
3 b4 hank good car yes herb ok beer simon
Reading many csv files into a single DF is a pretty common answer, and is the first part of your question. You can find a suitable answer here.
After that, in an effort to allow you to perform this on all of the files at the same time, you can melt and pivot with a custom agg function like so:
import glob
import pandas as pd
# See the linked answer if you need help finding csv files in a different directory
all_files = glob.glob('*.csv')
df = pd.concat((pd.read_csv(f) for f in all_files))
output = (df.melt(id_vars='0')
            .pivot_table(index='0',
                         columns='variable',
                         values='value',
                         aggfunc=lambda x: ' '.join(x)))
I have a dataset in which I add coordinates to cities based on zip-codes but several of these zip-codes are missing. Also, in some cases cities are missing, states are missing, or both are missing. For example:
ca_df[['OWNER_CITY', 'OWNER_STATE', 'OWNER_ZIP']]
OWNER_CITY OWNER_STATE OWNER_ZIP
495 MIAMI SHORE PA
496 SEATTLE
However, a second dataset has city, state & the matching zip-codes. This one is complete without any missing values.
df_coord.head()
OWNER_ZIP CITY STATE
0 71937 Cove AR
1 72044 Edgemont AR
2 56171 Sherburn MN
I want to fill in the missing zip-codes in the first dataframe if:
Zip-code is empty
City is present
State is present
This is an all-or-nothing operation: either all three criteria are met and the zip-code gets filled, or nothing changes.
However, this is a fairly large dataset with > 50 million records so ideally I want to vectorize the operation by working column-wise.
Technically, that would fit np.where, but as far as I know np.where only takes one condition, in the following format:
df1['OWNER_ZIP'] = np.where(df["cond"] == X, df_coord['OWNER_ZIP'], "")
How do I ensure I only fill missing zip-codes when all conditions are met?
Given ca_df:
OWNER_CITY OWNER_STATE OWNER_ZIP
0 Miami Shore Florida 111
1 Los Angeles California NaN
2 Houston NaN NaN
and df_coord:
OWNER_ZIP CITY STATE
0 111 Miami Shore Florida
1 222 Los Angeles California
2 333 Houston Texas
You can use pd.notna along with DataFrame.index like this:
inferrable_zips_df = pd.notna(ca_df["OWNER_CITY"]) & pd.notna(ca_df["OWNER_STATE"])
is_inferrable_zip = ca_df.index.isin(df_coord[inferrable_zips_df].index)
ca_df.loc[is_inferrable_zip, "OWNER_ZIP"] = df_coord["OWNER_ZIP"]
with ca_df resulting as:
OWNER_CITY OWNER_STATE OWNER_ZIP
0 Miami Shore Florida 111
1 Los Angeles California 222
2 Houston NaN NaN
I've changed the "" to np.nan, but if you still wish to use "" then you just need to change pd.notna(ca_df[...]) to ca_df[...] != "".
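In that case the mask from above would become, as a sketch:
inferrable_zips_df = (ca_df["OWNER_CITY"] != "") & (ca_df["OWNER_STATE"] != "")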
You can combine multiple rules inside a single numpy.where call with the & operator. This gives you the array of row indices which satisfy all three rules:
np.where((df["OWNER_ZIP"] == X) & (df["CITY"] == Y) & (df["STATE"] == Z))
Use:
print (df_coord)
OWNER_ZIP CITY STATE
0 71937 Cove AR
1 72044 Edgemont AR
2 56171 Sherburn MN
3 123 MIAMI SHORE PA
4 789 SEATTLE AA
print (ca_df)
OWNER_ZIP OWNER_CITY OWNER_STATE
0 NaN NaN NaN
1 72044 Edgemont AR
2 56171 NaN MN
3 NaN MIAMI SHORE PA
4 NaN SEATTLE NaN
First, it is necessary to check that the matching columns have the same dtypes:
#or convert ca_df['OWNER_ZIP'] to integers
df_coord['OWNER_ZIP'] = df_coord['OWNER_ZIP'].astype(str)
print (df_coord.dtypes)
OWNER_ZIP object
CITY object
STATE object
dtype: object
print (ca_df.dtypes)
OWNER_ZIP object
OWNER_CITY object
OWNER_STATE object
dtype: object
Then filter each combination of missing and non-missing columns, add the new data by merge, set the index to match the filtered data, and assign back:
mask1 = ca_df['OWNER_CITY'].notna() & ca_df['OWNER_STATE'].notna() & ca_df['OWNER_ZIP'].isna()
df1 = ca_df[mask1].drop('OWNER_ZIP', axis=1).merge(df_coord.rename(columns={'CITY':'OWNER_CITY','STATE':'OWNER_STATE'})).set_index(ca_df.index[mask1])
ca_df.loc[mask1, ['OWNER_ZIP','OWNER_CITY','OWNER_STATE']] = df1
mask2 = ca_df['OWNER_CITY'].notna() & ca_df['OWNER_STATE'].isna() & ca_df['OWNER_ZIP'].isna()
df2 = ca_df[mask2].drop(['OWNER_ZIP','OWNER_STATE'], axis=1).merge(df_coord.rename(columns={'CITY':'OWNER_CITY','STATE':'OWNER_STATE'})).set_index(ca_df.index[mask2])
ca_df.loc[mask2, ['OWNER_ZIP','OWNER_CITY','OWNER_STATE']] = df2
mask3 = ca_df['OWNER_CITY'].isna() & ca_df['OWNER_STATE'].notna() & ca_df['OWNER_ZIP'].notna()
df3 = ca_df[mask3].drop(['OWNER_CITY'], axis=1).merge(df_coord.rename(columns={'CITY':'OWNER_CITY','STATE':'OWNER_STATE'})).set_index(ca_df.index[mask3])
ca_df.loc[mask3, ['OWNER_ZIP','OWNER_CITY','OWNER_STATE']] = df3
print (ca_df)
OWNER_ZIP OWNER_CITY OWNER_STATE
0 NaN NaN NaN
1 72044 Edgemont AR
2 56171 Sherburn MN
3 123 MIAMI SHORE PA
4 789 SEATTLE AA
You can do a left join on these dataframes, joining on the city and state columns. That gives you the zip-code corresponding to a city and state whenever both values are non-null in the first dataframe (OWNER_CITY, OWNER_STATE, OWNER_ZIP), and since it is a left join it also preserves the rows that either don't have a zip-code or have null/empty city and state values.
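A minimal sketch of that left join, assuming the column names from the question and that missing values are NaN:
lookup = df_coord.rename(columns={'CITY': 'OWNER_CITY', 'STATE': 'OWNER_STATE'})
merged = ca_df.merge(lookup, on=['OWNER_CITY', 'OWNER_STATE'],
                     how='left', suffixes=('', '_lookup'))
# fill OWNER_ZIP only where it was missing and the lookup found a match
merged['OWNER_ZIP'] = merged['OWNER_ZIP'].fillna(merged['OWNER_ZIP_lookup'])
merged = merged.drop(columns='OWNER_ZIP_lookup')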
If I have a dataset like this:
id person_name salary
0 [alexander, william, smith] 45000
1 [smith, robert, gates] 65000
2 [bob, alexander] 56000
3 [robert, william] 80000
4 [alexander, gates] 70000
If we sum that salary column then we will get 316000
I want to know how much each distinct person ('alexander', 'smith', etc.) makes in salary if we sum the salaries of every row whose person_name list contains that name.
output:
group sum_salary
alexander 171000 #sum from id 0 + 2 + 4 (which contain 'alexander')
william 125000 #sum from id 0 + 3
smith 110000 #sum from id 0 + 1
robert 145000 #sum from id 1 + 3
gates 135000 #sum from id 1 + 4
bob 56000 #sum from id 2
As you can see, the sum of the sum_salary column is not the same as in the initial dataset, because each name is counted once per row it appears in.
It seems similar to counting string occurrences, but what confuses me is how to apply the aggregation function. I tried creating a list of the distinct values in the person_name column, but then I got stuck.
Any help is appreciated, thank you very much.
These solutions work with lists in the person_name column:
#if necessary
#df['person_name'] = df['person_name'].str.strip('[]').str.split(', ')
print (type(df.loc[0, 'person_name']))
<class 'list'>
The first idea is to use a defaultdict to store the summed values in a loop:
from collections import defaultdict

d = defaultdict(int)
for p, s in zip(df['person_name'], df['salary']):
    for x in p:
        d[x] += int(s)

print (d)
defaultdict(<class 'int'>, {'alexander': 171000,
'william': 125000,
'smith': 110000,
'robert': 145000,
'gates': 135000,
'bob': 56000})
And then:
df1 = pd.DataFrame({'group':list(d.keys()),
'sum_salary':list(d.values())})
print (df1)
group sum_salary
0 alexander 171000
1 william 125000
2 smith 110000
3 robert 145000
4 gates 135000
5 bob 56000
Another solution repeats each salary by the length of its list and then aggregates with sum:
from itertools import chain
df1 = pd.DataFrame({
    'group' : list(chain.from_iterable(df['person_name'].tolist())),
    'sum_salary' : df['salary'].values.repeat(df['person_name'].str.len())
})
df2 = df1.groupby('group', as_index=False, sort=False)['sum_salary'].sum()
print (df2)
group sum_salary
0 alexander 171000
1 william 125000
2 smith 110000
3 robert 145000
4 gates 135000
5 bob 56000
Another solution:
df_new = pd.DataFrame({'person_name': np.concatenate(df.person_name.values),
                       'salary': df.salary.repeat(df.person_name.str.len())})
print(df_new.groupby('person_name')['salary'].sum().reset_index())
person_name salary
0 alexander 171000
1 bob 56000
2 gates 135000
3 robert 145000
4 smith 110000
5 william 125000
This can be done concisely with dummies, though performance will suffer due to all of the .str methods:
df.person_name.str.join('*').str.get_dummies('*').multiply(df.salary, 0).sum()
#alexander 171000
#bob 56000
#gates 135000
#robert 145000
#smith 110000
#william 125000
#dtype: int64
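For readability, here is the same one-liner broken into steps (the intermediate names are just for illustration):
dummies = df.person_name.str.join('*').str.get_dummies('*')  # one 0/1 column per distinct name
weighted = dummies.multiply(df.salary, axis=0)                # each row's 1s become that row's salary
result = weighted.sum()                                       # column sums = per-name salary totals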
I parsed this as strings of lists, by copying the OP's data and using pandas.read_clipboard(). If that is indeed the case (a series of strings of lists), this solution would work:
df = df.merge(df.person_name.str.split(',', expand=True), left_index=True, right_index=True)
df = df[[0, 1, 2, 'salary']].melt(id_vars = 'salary').drop(columns='variable')
# Some cleaning up, then a simple groupby
df.value = df.value.str.replace('[', '', regex=False)
df.value = df.value.str.replace(']', '', regex=False)
df.value = df.value.str.replace(' ', '', regex=False)
df.groupby('value')['salary'].sum()
Output:
value
alexander 171000
bob 56000
gates 135000
robert 145000
smith 110000
william 125000
Another way you can do this is with iterrows(). This will not be as fast as jezrael's solutions, but it works:
ids = []
names = []
salarys = []
# Iterate over the rows and extract the names from the lists in person_name column
for ix, row in df.iterrows():
    for name in row['person_name']:
        ids.append(row['id'])
        names.append(name)
        salarys.append(row['salary'])
# Create a new 'unnested' dataframe
df_new = pd.DataFrame({'id':ids,
'names':names,
'salary':salarys})
# Groupby on person_name and get the sum
print(df_new.groupby('names').salary.sum().reset_index())
Output
names salary
0 alexander 171000
1 bob 56000
2 gates 135000
3 robert 145000
4 smith 110000
5 william 125000
I'm doing some research on a dataframe of people that are relatives. When I find brothers, I can't figure out a way to write them all down in a specific column. Here follows an example:
cols = ['Name','Father','Brother']
df = pd.DataFrame({'Brother':'',
'Father':['Erick Moon','Ralph Docker','Erick Moon','Stewart Adborn'],
'Name':['John Smith','Rodolph Ruppert','Mathew Common',"Patrick French"]
},columns=cols)
df
Name Father Brother
0 John Smith Erick Moon
1 Rodolph Ruppert Ralph Docker
2 Mathew Common Erick Moon
3 Patrick French Stewart Adborn
What I want is this:
Name Father Brother
0 John Smith Erick Moon Mathew Common
1 Rodolph Ruppert Ralph Docker
2 Mathew Common Erick Moon John Smith
3 Patrick French Stewart Adborn
I appreciate any help!
Here is an idea you can try: first create a Brother column with all brothers as a list (including the person itself), then remove the person itself in a second step. The code could probably be optimized, but it gives you somewhere to start from:
import numpy as np
import pandas as pd
df['Brother'] = df.groupby('Father')['Name'].transform(lambda g: [g.values])
def deleteSelf(row):
    row.Brother = np.delete(row.Brother, np.where(row.Brother == row.Name))
    return row
df.apply(deleteSelf, axis = 1)
# Name Father Brother
# 0 John Smith Erick Moon [Mathew Common]
# 1 Rodolph Ruppert Ralph Docker []
# 2 Mathew Common Erick Moon [John Smith]
# 3 Patrick French Stewart Adborn []
def same_father(me, data):
    hasdad = data.Father == data.at[me, 'Father']
    notme = data.index != me
    isbro = hasdad & notme
    return data.loc[isbro].index.tolist()
df2 = df.set_index('Name')
getbro = lambda x: same_father(x.name, df2)
df2['Brother'] = df2.apply(getbro, axis=1)
I think this should work (untested).
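If you want plain strings instead of lists in the Brother column (as in the desired output), a small follow-up sketch:
df2['Brother'] = df2['Brother'].apply(', '.join)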
First of all, I am very new to pandas and am trying to learn, so thorough answers will be appreciated.
I want to generate a pandas DataFrame representing a map twitter tag subtoken -> poster, where tag subtoken means anything in the set {hashtagA} U {i | i in split('_', hashtagA)}, starting from a table matching poster -> tweet.
For example:
In [1]: df = pd.DataFrame([["jim", "i was like #yolo_omg to her"], ["jack", "You are so #yes_omg #best_place_ever"], ["neil", "Yo #rofl_so_funny"]])
In [2]: df
Out[2]:
0 1
0 jim i was like #yolo_omg to her
1 jack You are so #yes_omg #best_place_ever
2 neil Yo #rofl_so_funny
And from that I want to get something like
0 1
0 jim yolo_omg
1 jim yolo
2 jim omg
3 jack yes_omg
4 jack yes
5 jack omg
6 jack best_place_ever
7 jack best
8 jack place
9 jack ever
10 neil rofl_so_funny
11 neil rofl
12 neil so
13 neil funny
I managed to construct this monstrosity that actually does the job:
In [143]: df[1].str.findall('#([^\s]+)') \
.apply(pd.Series).stack() \
.apply(lambda s: [s] + s.split('_') if '_' in s else [s]) \
.apply(pd.Series).stack().to_frame().reset_index(level=0) \
.join(df, on='level_0', how='right', lsuffix='_l')[['0','0_l']]
Out[143]:
         0               0_l
0 0    jim          yolo_omg
  1    jim              yolo
  2    jim               omg
  0   jack           yes_omg
  1   jack               yes
  2   jack               omg
1 0   jack  best_place_ever
  1   jack              best
  2   jack             place
  3   jack              ever
0 0   neil     rofl_so_funny
  1   neil              rofl
  2   neil                so
  3   neil             funny
But I have a very strong feeling that there are much better ways of doing this, especially given that the real dataset set is huge.
pandas indeed has a function for doing this natively.
Series.str.findall()
This basically applies a regex and captures the group(s) you specify in it.
So if I had your dataframe:
df = pd.DataFrame([["jim", "i was like #yolo_omg to her"], ["jack", "You are so #yes_omg #best_place_ever"], ["neil", "Yo #rofl_so_funny"]])
What I would do is first to set the names of your columns, like this:
df.columns = ['user', 'tweet']
Or do it on creation of the dataframe:
df = pd.DataFrame([["jim", "i was like #yolo_omg to her"], ["jack", "You are so #yes_omg #best_place_ever"], ["neil", "Yo #rofl_so_funny"]], columns=['user', 'tweet'])
Then I would simply apply the extract function with a regex:
df['tag'] = df["tweet"].str.findall("(#[^ ]*)")
And I would use the negative character group instead of a positive one; this is more likely to survive special cases.
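To go from that list column to the poster/subtoken rows the question asks for, one possible continuation is this sketch (explode needs pandas 0.25+; 'user' and 'tag' are the column names set above):
subtokens = df[['user', 'tag']].explode('tag').dropna(subset=['tag'])
subtokens['tag'] = subtokens['tag'].str.lstrip('#')
subtokens = subtokens.assign(tag=subtokens['tag'].apply(
    lambda t: [t] + t.split('_') if '_' in t else [t]))
subtokens = subtokens.explode('tag').reset_index(drop=True)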
How about using list comprehensions in Python and then reverting back to pandas? It requires a few lines of code but is perhaps more readable.
import re

# get the hash tags
tags = [re.findall(r'#([^\s]+)', t) for t in df[1]]

# make lists of the tags with subtokens for each user
st = [[t] + [s.split('_') for s in t] for t in tags]
subtokens = [[i for s in poster for i in s] for poster in st]

# put back into a DataFrame with poster names
df2 = pd.DataFrame(subtokens, index=df[0]).stack()
In [250]: df2
Out[250]:
jim 0 yolo_omg
1 yolo
2 omg
jack 0 yes_omg
1 best_place_ever
2 yes
3 omg
4 best
5 place
6 ever
neil 0 rofl_so_funny
1 rofl
2 so
3 funny
dtype: object