Finding value of another attribute given an attribute - python

I have a CSV with multiple lines, and I am looking to find the JobTitle of a person given their name. The CSV has been loaded into a DataFrame sal like this:
   id  employee_name  job_title
   1   SOME NAME      SOME TITLE
I'm trying to find the JobTitle for a given person's name, but I'm having a hard time doing this. I am currently learning pandas through crash courses, and I know I can get the job titles with sal['job_title'], but that gives me the entire column of job titles.
How can I find the value for a specific person?

You need boolean indexing:
sal[sal.employee_name == 'name']
If you need to select only one column, use loc with boolean indexing (.ix is deprecated and was removed in later pandas versions):
sal.loc[sal.employee_name == 'name', 'job_title']
Sample:
sal = pd.DataFrame({'id': [1, 2, 3],
                    'employee_name': ['name', 'name1', 'name2'],
                    'job_title': ['titleA', 'titleB', 'titleC']},
                   columns=['id', 'employee_name', 'job_title'])
print(sal)
   id employee_name job_title
0   1          name    titleA
1   2         name1    titleB
2   3         name2    titleC

print(sal[sal.employee_name == 'name'])
   id employee_name job_title
0   1          name    titleA

print(sal.loc[sal.employee_name == 'name', 'job_title'])
0    titleA
Name: job_title, dtype: object
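If you need the job title as a plain string rather than a one-element Series, you can take the first match from the filtered result; a minimal sketch using the sample above (.squeeze() would also work when exactly one row matches):

title = sal.loc[sal.employee_name == 'name', 'job_title'].iloc[0]  # first matching value
print(title)  # titleA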

Related

Setting specific rows to the value found in a row if differing index

I work with a lot of CSV data for my job. I am trying to use pandas to copy a member's 'Email' into the 'PrimaryMemberEmail' column of their spouse's row. Here is a sample of what I mean:
import pandas as pd
user_data = {'FirstName': ['John', 'Jane', 'Bob'],
             'Lastname': ['Snack', 'Snack', 'Tack'],
             'EmployeeID': ['12345', '12345S', '54321'],
             'Email': ['John@issues.com', 'NaN', 'Bob@issues.com'],
             'DOB': ['09/07/1988', '12/25/1990', '07/13/1964'],
             'Role': ['Employee On Plan', 'Spouse On Plan', 'Employee Off Plan'],
             'PrimaryMemberEmail': ['NaN', 'NaN', 'NaN'],
             'PrimaryMemberEmployeeId': ['NaN', '12345', 'NaN']
             }
df = pd.DataFrame(user_data)
I have thousands of rows like this. I need to populate the 'PrimaryMemberEmail' only when the user is a spouse, with the 'Email' of their associated primary holder. So in this case I would want to auto-populate the 'PrimaryMemberEmail' for Jane Snack with that of her spouse, John Snack, which is 'John@issues.com'. I cannot find a good way to do this. Currently I am using:
for i in (df['EmployeeId']):
    p = (p + len(df['EmployeeId']) - (len(df['EmployeeId']) - 1))
    EEID = df['EmployeeId'].iloc[p]
    if 'S' in EEID:
        df['PrimaryMemberEmail'].iloc[p] = df['Email'].iloc[p - 1]
What bothers me is that this only works if my file comes in correctly, like how I showed in the example DataFrame. Also my NaN values do not work with dropna() or other methods, but that is a question for another time.
I am new to python and programming. I am trying to add value to myself in my current health career and I find this all very fascinating. Any help is appreciated.
IIUC, map the values (the spouse's 'PrimaryMemberEmployeeId' to the primary holder's 'Email') and fillna:
df['PrimaryMemberEmail'] = (df['PrimaryMemberEmployeeId']
                            .map(df.set_index('EmployeeID')['Email'])
                            .fillna(df['PrimaryMemberEmail'])
                            )
Alternatively, if you have real NaNs (not strings), use boolean indexing:
df.loc[df['PrimaryMemberEmployeeId'].notna(),
       'PrimaryMemberEmail'] = df['PrimaryMemberEmployeeId'].map(df.set_index('EmployeeID')['Email'])
output:
  FirstName Lastname EmployeeID            Email         DOB               Role PrimaryMemberEmail PrimaryMemberEmployeeId
0      John    Snack      12345  John@issues.com  09/07/1988   Employee On Plan                NaN                     NaN
1      Jane    Snack     12345S              NaN  12/25/1990     Spouse On Plan    John@issues.com                   12345
2       Bob     Tack      54321   Bob@issues.com  07/13/1964  Employee Off Plan                NaN                     NaN
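As a side note on the dropna() issue mentioned in the question: this sample frame stores the literal string 'NaN', not real missing values, which is presumably why dropna() and similar methods have no effect. A hedged sketch of the usual fix, assuming the placeholders are exactly the string 'NaN':

import numpy as np

# Turn literal 'NaN' strings into real missing values so that
# dropna()/notna()/fillna() behave as expected
df = df.replace('NaN', np.nan)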

In Python, if there is a duplicate, use the date column to choose which duplicate to use

I have code that runs 16 test cases against a CSV, checking for anomalies from poor data entry. A new column, 'Test case failed,' is created. A number corresponding to which test it failed is added to this column when a row fails a test. These failed rows are separated from the passed rows; then, they are sent back to be corrected before they are uploaded into a database.
There are duplicates in my data, and I would like to add code to check for duplicates, then decide what field to use based on the date, selecting the most updated fields.
Here is my data with two duplicate IDs, with the first row having the most recent Address while the second row has the most recent name.
 ID   MnLast  MnFist  MnDead?  MnInactive?  SpLast  SpFirst  SPInactive?  SpDead  Addee         Sal       Address    NameChanged  AddrChange
 123  Doe     John    No       No           Doe     Jane     No           No      Mr. John Doe  Mr. John  123 place  05/01/2022   11/22/2022
 123  Doe     Dan     No       No           Doe     Jane     No           No      Mr. John Doe  Mr. John  789 road   11/01/2022   05/06/2022
Here is a snippet of my code showing the 5th test case, which checks for the following: the record has name information, the spouse has name information, no one is marked deceased, but the Addressee or Salutation doesn't contain "&" or "AND". The Addressee or Salutation needs to be corrected; this record is married.
import pandas as pd
import numpy as np

data = pd.read_csv("C:/Users/file.csv", encoding='latin-1')
# Create a column to store which test number(s) the row failed
data['Test Case Failed'] = ''
data = data.replace(np.nan, '', regex=True)
data.insert(0, 'ID', range(0, len(data)))
# There are several test cases, but they function primarily the same
# Testcase 1
# Testcase 2
# Testcase 3
# Testcase 4
# Testcase 5 - comparing strings in columns
df = data[((data['FirstName'] != '') & (data['LastName'] != '')) &
          ((data['SRFirstName'] != '') & (data['SRLastName'] != '') &
           (data['SRDeceased'].str.contains('Yes') == False) &
           (data['Deceased'].str.contains('Yes') == False)
           )]
df1 = df[df['PrimAddText'].str.contains("AND|&") == False]
data_5 = df1[df1['PrimSalText'].str.contains("AND|&") == False]
ids = data_5.index.tolist()
# Assign 5 for each failed row
for i in ids:
    data.at[i, 'Test Case Failed'] += ', 5'
# Failed if column 'Test Case Failed' is not empty, Passed if empty
failed = data[(data['Test Case Failed'] != '')].copy()
passed = data[(data['Test Case Failed'] == '')].copy()
failed['Test Case Failed'] = failed['Test Case Failed'].str[1:]
failed = failed[(failed['Test Case Failed'] != '')]
# Clean up
del failed["ID"]
del passed["ID"]
failed['Test Case Failed'].value_counts()
# Print to console
print("There was a total of", data.shape[0], "rows;",
      data.shape[0] - failed.shape[0], "rows passed and",
      failed.shape[0], "rows failed at least one test case")
# Output two files
failed.to_csv("C:/Users/Failed.csv", index=False)
passed.to_csv("C:/Users/Passed.csv", index=False)
What is the best approach to check for duplicates, choose the most updated fields, drop the outdated fields/row, and perform my test?
First, set up a mapping that associates each update-date column with its corresponding value columns:
date2val = {"AddrChange": ["Address"], "NameChanged": ["MnFist", "MnLast"], ...}
Then convert the date columns to datetime format so that they can be compared (argmax is used on them later):
for key in date2val.keys():
    failed[key] = pd.to_datetime(failed[key])
Then group the duplicates by ID (since ID decides whether a row is a duplicate). For each date column, take the maximum value in the group (which marks the most recent update) and look up the columns to update in the initial mapping. The last row of each group is updated and kept as the final result (by appending it to the corrected list).
corrected = list()
for _, grp in failed.groupby("ID"):
    for key in date2val.keys():
        recent = grp[key].argmax()
        for col in date2val[key]:
            # positional assignment: grp.iloc[-1][col] = ... would write to a copy
            grp.iloc[-1, grp.columns.get_loc(col)] = grp.iloc[recent][col]
    corrected.append(grp.iloc[-1])
corrected = pd.DataFrame(corrected)
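A minimal, runnable sketch of this approach on the two duplicate rows from the question (only the columns covered by date2val are included; grp is copied to avoid writing into the grouped frame):

import pandas as pd

failed = pd.DataFrame({
    "ID": [123, 123],
    "MnLast": ["Doe", "Doe"],
    "MnFist": ["John", "Dan"],
    "Address": ["123 place", "789 road"],
    "NameChanged": pd.to_datetime(["05/01/2022", "11/01/2022"]),
    "AddrChange": pd.to_datetime(["11/22/2022", "05/06/2022"]),
})
date2val = {"AddrChange": ["Address"], "NameChanged": ["MnFist", "MnLast"]}

corrected = []
for _, grp in failed.groupby("ID"):
    grp = grp.copy()
    for key, cols in date2val.items():
        recent = grp[key].argmax()  # position of the most recent update
        for col in cols:
            grp.iloc[-1, grp.columns.get_loc(col)] = grp.iloc[recent][col]
    corrected.append(grp.iloc[-1])
corrected = pd.DataFrame(corrected)
print(corrected)  # one row per ID: Dan Doe at '123 place'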
Preparing data:
import pandas as pd

c = 'ID MnLast MnFist MnDead? MnInactive? SpLast SpFirst SPInactive? SpDead Addee Sal Address NameChanged AddrChange'.split()
data1 = '123 Doe John No No Doe Jane No No Mr.JohnDoe Mr.John 123place 05/01/2022 11/22/2022'.split()
data2 = '123 Doe Dan No No Doe Jane No No Mr.JohnDoe Mr.John 789road 11/01/2022 05/06/2022'.split()
data3 = '8888 Brown Peter No No Brwon Peter No No Mr.PeterBrown M.Peter 666Avenue 01/01/2011 01/01/2011'.split()
df = pd.DataFrame(columns=c, data=[data1, data2, data3])
df.AddrChange = df.AddrChange.astype('datetime64[ns]')
df.NameChanged = df.NameChanged.astype('datetime64[ns]')
df
The DataFrame matches the example. Then you take a slice of the dataframe to avoid changing the original. Adjacent rows have the same ID, and the first one has the appropriate name:
df1 = df[['ID', 'MnFist', 'NameChanged']].sort_values(by=['ID', 'NameChanged'], ascending = False)
df1
Then you build a dictionary with df.ID as key and the appropriate name as its value, in order to rebuild the whole MnFist column:
d = {}
for id in set(df.ID.values):
    df_mask = df1.ID == id  # filter only rows with the same id
    filtered_df = df1[df_mask]
    if len(filtered_df) <= 1:
        d[id] = filtered_df.iat[0, 1]  # id has only one row, so no changes
        continue
    for name in filtered_df.MnFist:
        if name in ['unknown', '', ' '] or name is None:  # discard unusable names
            continue
        else:
            d[id] = name  # found a serviceable name (the most recent, since df1 is sorted)
            break
    if id not in d.keys():
        d[id] = filtered_df.iat[0, 1]  # no serviceable name, so pick the first
print(d)
The partial output of the dictionary is:
{'8888': 'Peter', '123': 'Dan'}
Then you rebuild the whole column:
df.MnFist = [d[id] for id in df.ID]
df
The partial output is:
Then apply the same procedure to the other column:
df1 = df[['ID', 'Address', 'AddrChange']].sort_values(by=['ID', 'AddrChange'], ascending = False)
df1
d = { id: df1.loc[df1.ID == id, 'Address'].values[0] for id in set(df.ID.values) }
d
df.Address = [d[id] for id in df.ID]
df
The final output is:
Edited after the author commented on the possibility of unknown or unserviceable data.
Let me restate what I understood from the question:
You have a dataset on which you are doing several sanity checks (it looks like you already have everything in place for this step).
In the next step you find duplicate rows with different columns updated at different dates (I assume that you already have this).
Now, you are looking for a new dataset that has non-duplicated rows, with the fields updated using the latest date entries.
First, define the different dates and their related columns as a dictionary:
date_to_cols = {"AddrChange": ["Address"], "NameChanged": ["MnLast", "MnFist"]}
Next, group by "ID" and take the index of the maximum value of each date column. Once we have that index, we can pull the related fields for that date from the data.
data[list(date_to_cols.keys())] = data[list(date_to_cols.keys())].apply(pd.to_datetime)
latest_data = data.groupby('ID')[list(date_to_cols.keys())].idxmax().reset_index()
for date_field, cols_to_update in date_to_cols.items():
    latest_data[cols_to_update] = latest_data[date_field].apply(lambda x: data.iloc[x][cols_to_update])
    latest_data[date_field] = latest_data[date_field].apply(lambda x: data.iloc[x][date_field])
Next, you can merge this latest_data with the original data (after removing the old columns):
cols_to_drop = list(latest_data.columns)
cols_to_drop.remove("ID")
data.drop(columns= cols_to_drop, inplace=True)
latest_data_all_fields = data.merge(latest_data, on="ID", how="left")
latest_data_all_fields.drop_duplicates(inplace=True)

Pandas unable to assign value to cell

I am trying to assign a value to the Team Name cell in the df. I was able to retrieve the value at the cell, but when I tried to assign a value to it, the change is not reflected:
   Unnamed: 0  Name             Email  Roll Number  Phone Number      Discord Id  Team Name
0           0  Name  email@google.edu         1025    9821090000  discordid#4431        NaN
register[register['Discord Id'] == 'discordid#4431']['Team Name']
gives the output
0 NaN
Name: Team Name, dtype: float64
but register[register['Discord Id'] == 'discordid#4431']['Team Name'] = 'Team1' does not reflect any change in the dataframe.
Can anybody help?
Try
mask = register['Discord Id'] == 'discordid#4431'
register.loc[mask, 'Team Name'] = 'Team1'
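For what it's worth, the reason the original attempt does nothing is that register[mask]['Team Name'] = ... is chained indexing: the first bracket returns a temporary copy, so the assignment lands on that copy (typically with a SettingWithCopyWarning), while .loc does the row and column selection in a single call and therefore writes to the original frame. A small sketch of the contrast, using a one-row frame like the one in the question:

import numpy as np
import pandas as pd

register = pd.DataFrame({'Discord Id': ['discordid#4431'], 'Team Name': [np.nan]})

# Chained indexing: assigns to a temporary copy, the original is unchanged
register[register['Discord Id'] == 'discordid#4431']['Team Name'] = 'Team1'
print(register['Team Name'].iloc[0])  # still NaN

# .loc: one indexing operation, the original frame is modified
register.loc[register['Discord Id'] == 'discordid#4431', 'Team Name'] = 'Team1'
print(register['Team Name'].iloc[0])  # Team1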
Is this what you're trying to do?
register['Team Name'] = register['Discord Id']

Getting an error when checking if values in a list match a column PANDAS

I'm just wondering how one might overcome the error below.
AttributeError: 'list' object has no attribute 'str'
What I am trying to do is create a new column "PrivilegedAccess"; in this column I want to write "True" if any of the names in the first_name column match the ones in the "Search_for_These_values" list, and "False" if they don't.
Code
## Create list of Privileged accounts
Search_for_These_values = ['Privileged','Diagnostics','SYS','service account'] #creating list
pattern = '|'.join(Search_for_These_values) # joining list for comparison
PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF.columns=[['first_name']].str.contains(pattern)
PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF['PrivilegedAccess'].map({True: 'True', False: 'False'})
SAMPLE DATA:
   uid    last_name   first_name  language                     role                  email_address      department
0  121         Chad  Diagnostics   English                Team Lead         Michael.chad@gmail.com  Data Scientist
1  253      Montegu        Paulo   Spanish                      CIO        Paulo.Montegu@gmail.com       Marketing
2  545      Mitchel        Susan   English                Team Lead        Susan.Mitchel@gmail.com  Data Scientist
3  555        Vuvko        Matia    Polish           Marketing Lead          Matia.Vuvko@gmail.com       Marketing
4  568         Sisk         Ivan   English        Supply Chain Lead            Ivan.Sisk@gmail.com    Supply Chain
5  475       Andrea      Patrice   Spanish           Sales Graduate       Patrice.Andrea@gmail.com           Sales
6  365  Akkinapalli      Cherifa    French  Supply Chain Assistance  Cherifa.Akkinapalli@gmail.com    Supply Chain
Note that the dtype of the first_name column is "object" and the dataframe has a MultiIndex for its columns (I'm not sure how to change it from a MultiIndex).
Many thanks
It seems you need to select one column for str.contains and then use map, or convert the booleans to strings:
Search_for_These_values = ['Privileged','Diagnostics','SYS','service account'] #creating list
pattern = '|'.join(Search_for_These_values)
PrivilegedAccounts_DF = pd.DataFrame({'first_name': ['Privileged 111',
                                                     'aaa SYS',
                                                     'sss']})
print (PrivilegedAccounts_DF.columns)
Index(['first_name'], dtype='object')
print (PrivilegedAccounts_DF.loc[0, 'first_name'])
Privileged 111
print (type(PrivilegedAccounts_DF.loc[0, 'first_name']))
<class 'str'>
PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF['first_name'].str.contains(pattern).astype(str)
print (PrivilegedAccounts_DF)
       first_name PrivilegedAccess
0  Privileged 111             True
1         aaa SYS             True
2             sss            False
EDIT:
The problem is a one-level MultiIndex in the columns; you need:
PrivilegedAccounts_DF = pd.DataFrame({'first_name': ['Privileged 111',
                                                     'aaa SYS',
                                                     'sss']})
# simulate the problem
PrivilegedAccounts_DF.columns = [PrivilegedAccounts_DF.columns.tolist()]
print (PrivilegedAccounts_DF)
print (PrivilegedAccounts_DF)
       first_name
0  Privileged 111
1         aaa SYS
2             sss
#check columns
print (PrivilegedAccounts_DF.columns)
MultiIndex([('first_name',)],
)
The solution is to join the values, e.g. by an empty string:
PrivilegedAccounts_DF.columns = PrivilegedAccounts_DF.columns.map(''.join)
Now the column names are correct:
print (PrivilegedAccounts_DF.columns)
Index(['first_name'], dtype='object')
PrivilegedAccounts_DF['PrivilegedAccess'] = PrivilegedAccounts_DF['first_name'].str.contains(pattern).astype(str)
print (PrivilegedAccounts_DF)
There might be a more elegant solution, but this should work (without using patterns):
PrivilegedAccounts_DF.loc[PrivilegedAccounts_DF['first_name'].isin(Search_for_These_values), "PrivilegedAccess"]=True
PrivilegedAccounts_DF.loc[~PrivilegedAccounts_DF['first_name'].isin(Search_for_These_values), "PrivilegedAccess"]=False
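One caveat: isin tests whole cell values for exact equality, while str.contains in the approach above matches substrings, so the two can classify rows differently. A quick sketch of the difference, using the list from the question:

import pandas as pd

s = pd.Series(['Privileged 111', 'Diagnostics', 'sss'])
Search_for_These_values = ['Privileged', 'Diagnostics', 'SYS', 'service account']

print(s.isin(Search_for_These_values).tolist())
# [False, True, False] (exact matches only)
print(s.str.contains('|'.join(Search_for_These_values)).tolist())
# [True, True, False] (substring matches)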

How to select specific data from a DataFrame after using value_counts()?

I used Python to read a file that contains babies' names, genders and birth years. Now I want to find the names that are used by both boys and girls. I used value_counts() to get the number of appearances of each name, but now I don't know how to extract those names from all the names.
Here is my codes:
def names_both(year):
    names = []
    path = 'babynames/yob%d.txt' % year
    columns = ['name', 'sex', 'birth']
    frame = pd.read_csv(path, names=columns)
    frame = frame['name'].value_counts()
    print(frame)
    """if len(names) != 0:
        print(names)
    else:
        print('None')"""
The frame now is like this:
Lou         2
Willie      2
Erie        2
Cora        2
           ..
Perry       1
Coy         1
Adolphus    1
Ula         1
Emily       1
Name: name, Length: 1889, dtype: int64
Here is the csv:
Anna,F,2604
Emma,F,2003
Elizabeth,F,1939
Minnie,F,1746
Margaret,F,1578
Ida,F,1472
Alice,F,1414
Bertha,F,1320
Sarah,F,1288
Annie,F,1258
Clara,F,1226
Ella,F,1156
Florence,F,1063
...
Thanks for helping!
Here is how to find the names given to both girls and boys, and to sum their birth counts:
common_girl_and_boys_names = (
    # work name by name
    frame.groupby('name')
    # keep a name when it was given to both sexes; the boolean ends up in a column called 0
    .apply(lambda x: len(x['sex'].unique()) == 2)
    # the names are now in the index; reset it in order to get the names as a column
    .reset_index()
    # keep only names whose column 0 holds True
    .loc[lambda x: x[0], 'name']
)
final_df = (
    # keep only the names common to boys and girls (the series built before)
    frame.loc[frame['name'].isin(common_girl_and_boys_names), :]
    # sex is now useless
    .drop(['sex'], axis='columns')
    # work name by name and sum the number of births
    .groupby('name')
    .sum()
)
You can put those lines after the read_csv call (they use the original frame, before value_counts()). I hope it is what you want.
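For reference, a more compact variant of the same idea, using nunique instead of groupby/apply; a sketch assuming the same three-column file layout as in the question:

import pandas as pd

def names_both(year):
    path = 'babynames/yob%d.txt' % year
    frame = pd.read_csv(path, names=['name', 'sex', 'birth'])
    # a name is shared when it appears with both sexes
    shared = frame.groupby('name')['sex'].nunique() == 2
    return shared[shared].index.tolist()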
