I have three columns in my data frame:
| CaseID | FirstName | LastName |
| ------ | --------- | -------- |
| 1      | rohit     | pandey   |
| 2      |           | rai      |
| 3      |           |          |
In the output, I am trying to add a fourth column whose values are LastName,FirstName.
I have this Python code:
df_ids['ContactName'] = df_ids[['LastName', 'FirstName']].agg(lambda x: ','.join(x.values), axis=1)
But it also appends the blank values, so I get something like below:
| CaseID | FirstName | LastName | ContactName   |
| ------ | --------- | -------- | ------------- |
| 1      | rohit     | pandey   | pandey, rohit |
| 2      |           | rai      | , rai         |
| 3      |           |          | ,             |
The expected output:
| CaseID | FirstName | LastName | ContactName   |
| ------ | --------- | -------- | ------------- |
| 1      | rohit     | pandey   | pandey, rohit |
| 2      |           | rai      | rai           |
| 3      |           |          |               |
Someone has added the PySpark tag, so here is a PySpark version. concat_ws skips nulls (but not empty strings), hence the round trip through None:
from pyspark.sql import functions as F
df_ids = df_ids.replace('', None) # Replaces empty strings with nulls
df_ids = df_ids.withColumn('ContactName', F.concat_ws(', ', 'LastName', 'FirstName'))
df_ids = df_ids.fillna('') # Replaces nulls back to empty strings
df_ids.show()
# +------+---------+--------+-------------+
# |CaseID|FirstName|LastName| ContactName|
# +------+---------+--------+-------------+
# | 1| rohit| pandey|pandey, rohit|
# | 2| | rai| rai|
# | 3| | | |
# +------+---------+--------+-------------+
This is the easy way, using apply. apply takes each row one at a time and passes it to the given function.
import pandas as pd

data = [
    [1, 'rohit', 'pandey'],
    [2, '', 'rai'],
    [3, '', ''],
]
df = pd.DataFrame(data, columns=['CaseID', 'FirstName', 'LastName'])

def fixup(row):
    if not row['LastName']:
        return ''
    if not row['FirstName']:
        return row['LastName']
    return row['LastName'] + ', ' + row['FirstName']

print(df)
df['Contact1'] = df.apply(fixup, axis=1)
print(df)
Output:
   CaseID  FirstName  LastName
0       1      rohit    pandey
1       2                  rai
2       3
   CaseID  FirstName  LastName       Contact1
0       1      rohit    pandey  pandey, rohit
1       2                  rai            rai
2       3
Two (actually one and a half) other options, which are very close to your attempt:
df_ids['ContactName'] = (
    df_ids[['LastName', 'FirstName']]
    .agg(lambda row: ', '.join(name for name in row if name), axis=1)
)
or
df_ids['ContactName'] = (
    df_ids[['LastName', 'FirstName']]
    .agg(lambda row: ', '.join(filter(None, row)), axis=1)
)
In both versions the empty strings are filtered out:
- via a generator expression: the if name clause makes sure '' is dropped, because its truth value is False - try print(bool(''));
- via the built-in function filter() with its first argument set to None, which keeps only truthy items.
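A quick way to see both filters in action, on plain lists (no pandas needed):

', '.join(name for name in ['rai', ''] if name)    # -> 'rai'
', '.join(filter(None, ['pandey', 'rohit']))       # -> 'pandey, rohit'
', '.join(filter(None, ['', '']))                  # -> ''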
In my one-sheet Excel file, created from my SQL, I have 3 columns that represent letter ratings. The rating values may differ between Rating_1, Rating_2, and Rating_3, but two columns can still hold the same value.
I am trying to create a new column in my Excel file that takes these 3 letter ratings and pulls the middle rating.
| ranking (1 lowest) | Rating_1 | Rating_2 | Rating_3 | NEW_COLUMN     |
| ------------------ | -------- | -------- | -------- | -------------- |
| 3                  | A+       | AA       | Aa       | middle(rating) |
| 2                  | B+       | BB       | Bb       | middle(rating) |
| 1                  | Fa       | Fb       | Fc       | middle(rating) |
There are three scenarios I need to account for:
- if all three ratings differ, pick the rating that is neither the highest nor the lowest
- if all three ratings are the same, pick Rating_1
- if two of the ratings are the same but one differs, pick the minimum rating
I created a dataframe:
df = pd.DataFrame(
    {"Rating_1": ["A+", "AA", "Aa"],
     "Rating_2": ["B+", "BB", "Bb"],
     "Rating_3": ["Fa", "Fb", "Fc"]}
)
df["NEW COLUMN"] = {insert logic here}
Or is it easier to create a new DF that filters down the original DF?
With the following toy dataframe:
import pandas as pd
df = pd.DataFrame(
    {
        "Rating_1": ["A+", "Cc", "Aa"],
        "Rating_2": ["AA", "Cc", "Aa"],
        "Rating_3": ["BB", "Cc", "Bb"],
    }
)
print(df)
# Output
  Rating_1  Rating_2  Rating_3
0       A+        AA        BB
1       Cc        Cc        Cc
2       Aa        Aa        Bb
Here is one way to do it using Python sets to check conditions:
# First condition: all three ratings differ -> take the middle one
df["Middle_rating"] = df.apply(
    lambda x: sorted([x["Rating_1"], x["Rating_2"], x["Rating_3"]])[1]
    if len(set([x["Rating_1"], x["Rating_2"], x["Rating_3"]])) == 3
    else "",
    axis=1,
)
# Second condition: all three ratings are equal -> take Rating_1
df["Middle_rating"] = df.apply(
    lambda x: x["Rating_1"]
    if len(set([x["Rating_1"], x["Rating_2"], x["Rating_3"]])) == 1
    else x["Middle_rating"],
    axis=1,
)
# Third condition: exactly two ratings are equal -> take the minimum (worst) rating
ratings = {
    rating: i
    for i, rating in enumerate(["A+", "AA", "Aa", "B+", "BB", "Bb", "C+", "CC", "Cc"])
}  # ratings ordered from best (A+: 0) to worst (Cc: 8)
df["Middle_rating"] = df.apply(
    lambda x: max([x["Rating_1"], x["Rating_2"], x["Rating_3"]], key=ratings.get)
    if len(
        set([ratings[x["Rating_1"]], ratings[x["Rating_2"]], ratings[x["Rating_3"]]])
    )
    == 2
    else x["Middle_rating"],
    axis=1,
)
Then:
print(df)
# Output
  Rating_1  Rating_2  Rating_3  Middle_rating
0       A+        AA        BB             AA
1       Cc        Cc        Cc             Cc
2       Aa        Aa        Bb             Bb
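If you prefer, the three passes can also be collapsed into a single apply with a plain function. A sketch reusing the ratings dict above (middle_of is just an illustrative name):

def middle_of(row):
    values = [row["Rating_1"], row["Rating_2"], row["Rating_3"]]
    distinct = len(set(values))
    if distinct == 3:
        # all differ: the true middle by rating order
        return sorted(values, key=ratings.get)[1]
    if distinct == 1:
        # all equal: take Rating_1
        return row["Rating_1"]
    # exactly two equal: the minimum (worst) rating, i.e. the largest index
    return max(values, key=ratings.get)

df["Middle_rating"] = df.apply(middle_of, axis=1)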
Python 3.9 and Pandas 1.3.4
So here's the df:
1  First Name  Last Name  fullname
2  Freddie     Mercury    Freddie Mercury
3  John        Lennon     John Lennon
4  David       Bowie      David Bowie
5                         John Doe
6  Joseph                 Joseph
7              Jovi       Jovi
My code currently just builds the fullname column as First Name + Last Name.
I'm currently trying to filter for blank entries in the First Name column, Last Name column, and any "John Does" in the fullname column.
Current code:
import pandas as pd
df = pd.read_csv('file.csv', dtype=str, header=0)
df2 = pd.DataFrame(df, columns=['First Name', 'Last Name', 'fullname'])
df['fullname'] = (df[['First Name', 'Last Name']].fillna('').agg(' '.join, axis=1).str.strip().replace('', 'John Doe'))
df_sort = df2.loc[df2['First Name'] == " "] | df2.loc[df2['Last Name'] == " "] | df2.loc[df2['fullname'] == "John Doe"]
df.to_csv('file.csv', index=False)
df_sort.to_csv('missing names.csv', index=False)
Currently the missing names are written to a new file, but the output is only this:
First Name Last Name fullname
Everything under the header is empty.
I would like for the output to be:
First Name  Last Name  fullname
                       John Doe
Joseph                 Joseph
            Jovi       Jovi
Replace possible missing values with empty strings, compare, and test whether at least one value per row matches with DataFrame.any:
df_sort = df2[df2[['First Name', 'Last Name']].fillna('').eq('').any(axis=1)]
Or if there are missing values use:
df_sort = df2[df2[['First Name', 'Last Name']].isna().any(axis=1)]
You don't need .loc for each condition, but you do need to add some parentheses:
>>> df2[(df2['First Name'] == " ")
...     | (df2['Last Name'] == " ")
...     | (df2['fullname'] == "John Doe")]
  First Name  Last Name  fullname
3        NaN        NaN  John Doe
4     Joseph        NaN    Joseph
5        NaN       Jovi      Jovi
Note: adapt the missing-value test to your data. Here I use == " " (blank space) like you, but you may need == "" (empty string) or .isna() (missing values).
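Putting the pieces together for the expected output (a sketch, assuming the blanks are real missing values, NaN, as pd.read_csv produces):

mask = (
    df2[['First Name', 'Last Name']].isna().any(axis=1)
    | df2['fullname'].eq('John Doe')
)
df_sort = df2[mask]
df_sort.to_csv('missing names.csv', index=False)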
I'm trying to rationalize a quite scrambled phonebook xls of several thousand records. Some fields are merged with others and/or saved into the wrong column, while other fields are split across two or more columns... and so on. I'm trying to identify the main error patterns and fix them with regex, placing the right value into the right column.
An example:
DataFrame as df:
| id | Name              | SecondName | Surname    | Title | Company      |
| -- | ----------------- | ---------- | ---------- | ----- | ------------ |
| 01 | Marc              |            | Gigio      |       | ETC ltd      |
| 02 | Piero (Four       | Season     | Restaurant |       | )            |
| 03 | bubbu(Caterpilar) |            |            |       |              |
| 04 |                   |            |            |       | gaby(ts Inc) |
| 05 | Pit(REV inc)      |            |            |       | REV Inc      |
| 06 | Pluto             |            |            |       |              |
In record 01: nothing to do, but see how to manage the conditional exception as in record 05.
In record 02: merge Name + SecondName + Surname, then extract the name (Piero) from the new string and place it in the Name column, while extracting the content of the parentheses from the same string and placing it in the Company column:
df['Nameall_tmp'] = df['Name'] + ' ' + df['SecondName'] + ' ' + df['Surname'] + ' ' + df['Company']
df['Name_tmp'] = df['Nameall_tmp'].str.extract(r'(.+?)\(')
df['Company_tmp'] = df['Nameall_tmp'].str.extract(r'.*\((.+)\)')
In records 03 and 04: almost the same as 02.
In record 06:
df['Nameall_tmp'] = df['Name'] + ' ' + df['SecondName'] + ' ' + df['Surname'] + ' ' + df['Company']
df['Name_tmp'] = df['Nameall_tmp'].str.extract(r'(.+?)\(')
df['Name_tmp'] = np.where(df['Name_tmp'] == 'nan', df['Name'], df['Name_tmp'])
In this case the np.where statement doesn't work like if/then/else. The intent is to check whether df['Name_tmp'] is "nan" and, if so, fill it with the original df['Name'] to eliminate "nan" from the record, else keep df['Name_tmp']. Any suggestion?
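On the np.where question itself: after str.extract, rows without a match hold real missing values (np.nan, a float), not the string 'nan', so the == 'nan' comparison never matches. Test with .isna() instead, or skip np.where entirely; a minimal sketch:

df['Name_tmp'] = np.where(df['Name_tmp'].isna(), df['Name'], df['Name_tmp'])
# or, equivalently:
df['Name_tmp'] = df['Name_tmp'].fillna(df['Name'])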
Rough thinking here:
- munge the Company column so that, if it contains a legit company name, () is added around it; if not, keep the original content
- concat all columns into one conglomerate column
- use one regex with str.extract to split that single conglomerate column back into the desired columns
Anyway, following this rough thinking, I have at least reduced the problem to fine-tuning a single regex:
import re

import numpy as np
import pandas as pd

df = pd.DataFrame(
    columns="index Name SecondName Surname Company".split(),
    data=[
        [0, "Marc", np.nan, "Gigio", "ETC ltd"],
        [1, "Piero", "(four", "season", "restaurant)"],
        [2, "bubbu(caterpilar)", np.nan, np.nan, np.nan],
        [3, np.nan, np.nan, np.nan, "gaby(ts inc)"],
        [4, "Pit(REV inc)", np.nan, np.nan, "REV inc"],
        [5, "pluto", np.nan, np.nan, np.nan],
    ],
).set_index("index", drop=True)
df = df.fillna('')
df['Company'] = df['Company'].apply(
    lambda x: f'({x})' if ('(' not in x and ')' not in x and x != "") else x
)
# df['sum'] = df.sum(axis=1)
df['sum'] = df['Name'] + ' ' + df['SecondName'] + ' ' + df['Surname'] + ' ' + df['Company']
df['sum'] = df['sum'].str.replace(r'\s+', ' ', regex=True)  # get rid of extra \s due to the concat above
rex = re.compile(  # very fragile and hardcoded
    r"""
    (?P<name0>[a-z]{2,})
    \s?
    (?P<surename0>[a-z]{2,})?
    \s?
    \(?
    (?P<company0>[a-z\s]{3,})?
    \)?
    \s?
    """,
    re.X + re.I,
)
df['sum'].str.extract(rex)
output:
+---------+---------+-------------+------------------------+
| index | name0 | surename0 | company0 |
|---------+---------+-------------+------------------------|
| 0 | Marc | Gigio | ETC ltd |
| 1 | Piero | nan | four season restaurant |
| 2 | bubbu | nan | caterpilar |
| 3 | gaby | nan | ts inc |
| 4 | Pit | nan | REV inc |
| 5 | pluto | nan | nan |
+---------+---------+-------------+------------------------+
EDIT:
The earlier answer contained an error in my regex (I forgot to ? the \( ), so it couldn't quite handle "pluto"; corrected now.
The moral of the story is that the regex you need to design will be very specialized, fragile, and hardcoded. It is almost worth considering a df['sum'].apply(myfoo) approach just to parse df['sum'] more thoroughly.
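For instance, a myfoo built on plain string splitting instead of a single regex (a sketch against the toy df['sum'] built above; myfoo is the hypothetical parser named in the paragraph):

def myfoo(s):
    # everything before the first '(' is the name,
    # anything between '(' and ')' is the company
    name, _, rest = s.partition('(')
    company = rest.split(')')[0].strip()
    return pd.Series({'name': name.strip() or None,
                      'company': company or None})

parsed = df['sum'].apply(myfoo)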
I have some code and the printed output looks pretty weird. I want to fix it.
*The Prints
     Matching      Score
0  john carry  73.684211
       Matching  Score
0  alex midlane   80.0
       Matching  Score
0  alex midlane   50.0
      Matching      Score
0  robert patt  53.333333
      Matching      Score
0  robert patt  70.588235
      Matching  Score
0  david baker  100.0
*I need this format
| Matching | Score |
| ------------ | -----------|
| john carry | 73.684211 |
| alex midlane | 80.0 |
| alex midlane | 50.0 |
| robert patt | 53.333333 |
| robert patt | 70.588235 |
| david baker | 100.0 |
*My Code
import numpy as np
import pandas as pd
from rapidfuzz import process, fuzz

df = pd.DataFrame({
    "NameTest": ["john carry", "alex midlane", "robert patt", "david baker", np.nan, np.nan, np.nan],
    "Name": ["john carrt", "john crat", "alex mid", "alex", "patt", "robert", "david baker"]
})

NameTests = [name for name in df["NameTest"] if isinstance(name, str)]

for Name in df["Name"]:
    if isinstance(Name, str):
        match = process.extractOne(
            Name, NameTests,
            scorer=fuzz.ratio,
            processor=None,
            score_cutoff=10)
        data = {'Matching': [match[0]],
                'Score': [match[1]]}
        df1 = pd.DataFrame(data)
        print(df1)
I have tried many ways but got the same prints. Thank you for any suggestion.
You need a list to collect all the rows, because you are creating a new dataframe in each loop iteration:
data = []
for Name in df["Name"]:
    if isinstance(Name, str):
        match = process.extractOne(
            Name, NameTests,
            scorer=fuzz.ratio,
            processor=None,
            score_cutoff=10)
        data.append({'Matching': match[0],
                     'Score': match[1]})

df1 = pd.DataFrame(data)
print(df1)
Here is the output:
       Matching       Score
0    john carry   73.684211
1  alex midlane   80.000000
2  alex midlane   50.000000
3   robert patt   53.333333
4   robert patt   70.588235
5   david baker  100.000000
You create a new dataframe in each loop iteration. You can instead store the results in a dict and create the dataframe from that dict after the loop.
data = {'Matching': [], 'Score': []}
for Name in df["Name"]:
    if isinstance(Name, str):
        match = process.extractOne(
            Name, NameTests,
            scorer=fuzz.ratio,
            processor=None,
            score_cutoff=10)
        data['Matching'].append(match[0])
        data['Score'].append(match[1])
df1 = pd.DataFrame(data)
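The same idea also fits in a single list comprehension, if you prefer to build all the rows in one expression (a sketch using the df and NameTests from the question):

# extractOne returns (choice, score, index) tuples
matches = [
    process.extractOne(Name, NameTests, scorer=fuzz.ratio,
                       processor=None, score_cutoff=10)
    for Name in df["Name"] if isinstance(Name, str)
]
df1 = pd.DataFrame({'Matching': [m[0] for m in matches],
                    'Score': [m[1] for m in matches]})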
I have a script that converts files.
import pandas as pd
df = pd.read_csv("sample1.csv")
final_df = df.reindex(['id','name','email'],axis=1)
final_df.to_csv("output.csv", index = False)
sample1.csv
|name|email|id|
|--| -- | -- |
output.csv
|id|name|email|
|--| -- | -- |
Now, if other sample files come in formats like the ones below, how do I arrange them in the same format as output.csv?
sample2.csv
|id|first name |email address|
|--| -- | -- |
|1 | Sta |sta#example.com|
|2 |Danny|dany#example.com|
|3 |Elle |elle#example.com|
sample3.csv
|id|initial name |email id|
|--| -- | -- |
|1 | Ricky|ricky#example.com|
|2 |Sham|sham#example.com|
|3 |Mark|#example.com|
sample4.csv
| id |alias|contact|
|-- | -- | -- |
| 1 | Ricky|ricky#example.com|
|2 |Sham|sham#example.com|
|3 |Mark|#example.com|
I want to convert these files and place the values in the columns of the output file. For example, first name, initial name, and alias all mean name, and email address, email id, and contact should all map to email. The order of the columns can be random in the sample files.
The basic illustration for this case is :
switch (headerfields[i])
{
    case "firstname":
    case "initial name":
    case "alias":
        name = i;
}
Any ideas to do this in Pandas?
Select the target columns, then concatenate them onto the target DataFrame.
dfn = pd.DataFrame(columns=['id', 'name', 'email'])
for df in [df1, df2, df3]:
    # select columns
    cond_list = [
        df.columns == 'id',
        df.columns.str.contains('name|alias', na=False),
        df.columns.str.contains('email|contact', na=False),
    ]
    cols = [df.columns[cond][0] for cond in cond_list]
    print(cols)
    # DataFrame.append was removed in pandas 2.0; pd.concat does the same job
    dfn = pd.concat([dfn, pd.DataFrame(df[cols].values, columns=dfn.columns)])
output:
['id', 'first name', 'email address']
['id', 'initial name', 'email id']
['id', 'alias', 'contact']
dfn:
   id   name              email
0   1    Sta    sta#example.com
1   2  Danny   dany#example.com
2   3   Elle   elle#example.com
0   1  Ricky  ricky#example.com
1   2   Sham   sham#example.com
2   3   Mark       #example.com
0   1  Ricky  ricky#example.com
1   2   Sham   sham#example.com
2   3   Mark       #example.com
Testing data:
import io

df_str = '''
id "first name" "email address"
1 Sta sta#example.com
2 Danny dany#example.com
3 Elle elle#example.com
'''
df1 = pd.read_csv(io.StringIO(df_str.strip()), sep=r'\s+', index_col=False)

df_str = '''
id "initial name" "email id"
1 Ricky ricky#example.com
2 Sham sham#example.com
3 Mark #example.com
'''
df2 = pd.read_csv(io.StringIO(df_str.strip()), sep=r'\s+', index_col=False)

df_str = '''
id alias contact
1 Ricky ricky#example.com
2 Sham sham#example.com
3 Mark #example.com
'''
df3 = pd.read_csv(io.StringIO(df_str.strip()), sep=r'\s+', index_col=False)

# add a dummy column and sort, so the column order is shuffled
df1['1'] = 1
df2['2'] = 2
df3['3'] = 3
df1.sort_index(axis=1, inplace=True)
df2.sort_index(axis=1, inplace=True)
df3.sort_index(axis=1, inplace=True)
Not the cleanest solution, but you could test the first 5 rows of each dataframe for certain strings/numbers and assume that's your target column.
import numpy as np
import pandas as pd

def rename_and_merge_dfs(dfs: list) -> pd.DataFrame:
    new_dfs = []
    for frame in dfs:
        # the id column is the (only) numeric one
        id_col = frame.head(5).select_dtypes(np.number).columns[0]
        # the email column is the one whose first rows all contain a '#'
        email = frame.columns[frame.head(5).replace('[^#]', '', regex=True).eq('#').all()][0]
        # whatever is left over is the name column
        name = list(set(frame.columns) - set([id_col, email]))[0]
        frame = frame.rename(columns={id_col: 'id', email: 'email', name: 'name'})
        new_dfs.append(frame)
    return pd.concat(new_dfs)

final = rename_and_merge_dfs([df3, df2, df1])
print(final)
   id   name              email
1   1  Ricky  ricky#example.com
2   2   Sham   sham#example.com
3   3   Mark       #example.com
0   1  Ricky  ricky#example.com
1   2   Sham   sham#example.com
2   3   Mark       #example.com
0   1    Sta    sta#example.com
1   2  Danny   dany#example.com
2   3   Elle   elle#example.com
This solved my problem.
import pandas as pd

sample1 = pd.read_csv('sample1.csv')

def mapping(df):
    # rename every known header variant to a canonical column name
    df.rename(columns={'first name': 'FNAME', 'email address': 'EMAIL'}, inplace=True)
    df.rename(columns={'alias': 'FNAME', 'contact': 'EMAIL'}, inplace=True)
    df.rename(columns={'initial name': 'FNAME', 'email id': 'EMAIL'}, inplace=True)

mapping(sample1)
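For reference, the same idea can be written once as a rename map plus the reindex from the original script. A sketch; the lowercase target names follow output.csv from the top of the question:

COLUMN_MAP = {
    'first name': 'name', 'initial name': 'name', 'alias': 'name',
    'email address': 'email', 'email id': 'email', 'contact': 'email',
}

def normalize(df):
    # rename known header variants, then force the output.csv column order
    return df.rename(columns=COLUMN_MAP).reindex(['id', 'name', 'email'], axis=1)

final_df = normalize(pd.read_csv('sample2.csv'))
final_df.to_csv('output.csv', index=False)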