Getting cell value by row name and column name from dataframe - python

Let's say I have the following data frame
name age favorite_color grade
0 Willard Morris 20 blue 88
1 Al Jennings 19 blue 92
2 Omar Mullins 22 yellow 95
3 Spencer McDaniel 21 green 70
I'm trying to get the grade for Omar, which is "95".
It can be easily obtained using:
ddf = df.loc[[2], ['grade']]
print(ddf)
However, I want to use his name "Omar" instead of using the raw index "2".
Is it possible?
I tried the following syntax, but it didn't work:
ddf = df.loc[['Omar Mullins'], ['grade']]

Try this:
ddf = df[df['name'] == 'Omar Mullins']['grade']
to output the grade values.
Instead:
ddf = df[df['name'] == 'Omar Mullins']
will output the full row.
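If the names are unique, another option is to promote name to the index so .loc can be keyed by the name directly. A minimal sketch, using the sample frame above:

import pandas as pd

df = pd.DataFrame({
    'name': ['Willard Morris', 'Al Jennings', 'Omar Mullins', 'Spencer McDaniel'],
    'age': [20, 19, 22, 21],
    'favorite_color': ['blue', 'blue', 'yellow', 'green'],
    'grade': [88, 92, 95, 70],
})

# Boolean mask plus squeeze() collapses the one-element result to a scalar
print(df.loc[df['name'] == 'Omar Mullins', 'grade'].squeeze())  # 95

# Or make 'name' the index so .loc accepts the name directly
print(df.set_index('name').loc['Omar Mullins', 'grade'])  # 95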

Related

How to leave certain values (which have a comma in them) intact when separating list-values in strings in pandas?

From the dataframe, I create a new dataframe in which the values from the "Select activity" column contain lists, which I will split and transform into new rows. But there is one value, "Nothing, just walking", which I need to leave unchanged. How can I do this?
The original dataframe looks like this:
Name Age Select activity Profession
0 Ann 25 Cycling, Running Saleswoman
1 Mark 30 Nothing, just walking Manager
2 John 41 Cycling, Running, Swimming Accountant
My code looks like this:
df_new = df.loc[:, ['Name', 'Age']]
df_new['Activity'] = df['Select activity'].str.split(', ')
df_new = df_new.explode('Activity').reset_index(drop=True)
I get this result:
Name Age Activity
0 Ann 25 Cycling
1 Ann 25 Running
2 Mark 30 Nothing
3 Mark 30 just walking
4 John 41 Cycling
5 John 41 Running
6 John 41 Swimming
So that the value "Nothing, just walking" is not split into two values, I added the following line:
if df['Select activity'].isin(['Nothing, just walking']) is False:
But it throws an error.
Let's look ahead after the comma and require a capital letter, and only then split. So instead of ", " the separator becomes ", (?=[A-Z])":
df_new = df.loc[:, ["Name", "Age"]]
df_new["Activity"] = df["Select activity"].str.split(", (?=[A-Z])")
df_new = df_new.explode("Activity", ignore_index=True)
Only the splitter changed, plus ignore_index=True passed to explode instead of resetting the index afterwards (and double quotes instead of single ones),
to get
>>> df_new
Name Age Activity
0 Ann 25 Cycling
1 Ann 25 Running
2 Mark 30 Nothing, just walking
3 John 41 Cycling
4 John 41 Running
5 John 41 Swimming
As a one-liner, as usual:
df_new = (df.loc[:, ["Name", "Age"]]
.assign(Activity=df["Select activity"].str.split(", (?=[A-Z])"))
.explode("Activity", ignore_index=True))

Python Pandas concatenate every 2nd row to previous row

I have a Pandas dataframe similar to this one:
age name sex
0 30 jon male
1 blue php null
2 18 jane female
3 orange c++ null
and I am trying to concatenate every second row to the previous one adding extra columns:
age name sex colour language other
0 30 jon male blue php null
1 18 jane female orange c++ null
I tried shift() but it duplicated every row.
How can this be done?
You can create a new dataframe by slicing the dataframe using iloc with a step of 2:
cols = ['age', 'name', 'sex']
new_cols = ['colour', 'language', 'other']
d = dict()
for col, ncol in zip(cols, new_cols):
    d[col] = df[col].iloc[::2].values
    d[ncol] = df[col].iloc[1::2].values
pd.DataFrame(d)
Result:
age colour name language sex other
0 30 blue jon php male NaN
1 18 orange jane c++ female NaN
Try:
df = pd.concat([df.iloc[::2].reset_index(drop=True),
                pd.DataFrame(df.iloc[1::2].values,
                             columns=['colour', 'language', 'other'])],
               axis=1)
Output:
age name sex colour language other
0 30 jon male blue php NaN
1 18 jane female orange c++ NaN
Reshape the values and create a new dataframe
pd.DataFrame(df.values.reshape(-1, df.shape[1] * 2),
             columns=['age', 'name', 'sex', 'colour', 'language', 'other'])
age name sex colour language other
0 30 jon male blue php NaN
1 18 jane female orange c++ NaN
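All three approaches assume the rows strictly alternate person/extra and that the row count is even. A minimal runnable sketch of the reshape variant under that assumption:

import pandas as pd

df = pd.DataFrame({
    'age': [30, 'blue', 18, 'orange'],
    'name': ['jon', 'php', 'jane', 'c++'],
    'sex': ['male', None, 'female', None],
})

# Two consecutive rows of three columns become one row of six columns
assert len(df) % 2 == 0, 'reshape needs an even number of rows'
out = pd.DataFrame(df.values.reshape(-1, df.shape[1] * 2),
                   columns=['age', 'name', 'sex',
                            'colour', 'language', 'other'])
print(out)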

Choose higher value based on column value between two dataframes

A question about choosing values based on two dataframes.
>>> df[['age','name']]
age name
0 44 Anna
1 22 Bob
2 33 Cindy
3 44 Danis
4 55 Cindy
5 66 Danis
6 11 Anna
7 43 Bob
8 12 Cindy
9 19 Danis
10 11 Anna
11 32 Anna
12 55 Anna
13 33 Anna
14 32 Anna
>>> df2[['age','name']]
age name
5 66 Danis
4 55 Cindy
0 44 Anna
7 43 Bob
The expected result is every row of df whose 'age' is higher than the 'age' in df2 for the same 'name'.
Expected result:
age name
12 55 Anna
Per comments, use merge and filter the dataframe:
df.merge(df2, on='name', suffixes=('', '_y')).query('age > age_y')[['name', 'age']]
Output:
name age
4 Anna 55
IIUC, you can use this to find the max age of all names:
pd.concat([df,df2]).groupby('name')['age'].max()
Output:
name
Anna 55
Bob 43
Cindy 55
Danis 66
Name: age, dtype: int64
Try this:
# look up each name's reference age in df2 first
age = df['name'].map(df2.set_index('name')['age'])
index = df[df['age'] > age].index
df.loc[index]
There are a few edge cases you don't mention how you would like to resolve, but generally what you want to do is iterate down the df and compare ages and use the larger. You could do so in the following manner:
df3 = pd.DataFrame(columns=['age', 'name'])
for x in range(len(df)):
    if df['age'][x] > df2['age'][x]:
        df3.loc[x] = [df['age'][x], df['name'][x]]
    else:
        df3.loc[x] = [df2['age'][x], df2['name'][x]]
Although you will need to modify this to reflect how you want to resolve names that are only in one list, or if the lists are of different sizes.
One solution that comes to my mind is merge and query:
df.merge(df2, on='name', suffixes=('', '_y')).query('age.gt(age_y)', engine='python')[['age','name']]
Out[175]:
age name
4 55 Anna
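For reference, a self-contained sketch of the merge-and-filter approach on the sample data:

import pandas as pd

df = pd.DataFrame({
    'age': [44, 22, 33, 44, 55, 66, 11, 43, 12, 19, 11, 32, 55, 33, 32],
    'name': ['Anna', 'Bob', 'Cindy', 'Danis', 'Cindy', 'Danis', 'Anna', 'Bob',
             'Cindy', 'Danis', 'Anna', 'Anna', 'Anna', 'Anna', 'Anna'],
})
df2 = pd.DataFrame({'age': [66, 55, 44, 43],
                    'name': ['Danis', 'Cindy', 'Anna', 'Bob']},
                   index=[5, 4, 0, 7])

# age stays df's value; age_y is the reference age from df2
result = (df.merge(df2, on='name', suffixes=('', '_y'))
            .query('age > age_y')[['age', 'name']])
print(result)  # only Anna's 55 exceeds df2's 44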

Replacing values from one dataframe to another

I'm trying to fix discrepancies in a column from one df against a column in another.
The tables are not sorted either.
How can I do this using Python? Example:
df1
Age Name
40 Sid Jones
50 Alex, Bot
32 Tony Jar
65 Fred, Smith
24 Brad, Mans
df2
Age Name
24 Brad Mans
32 Tony Jar
40 Sid Jones
65 Fred Smith
50 Alex Bot
I need to replace the values in df2 to match those in df1 as you can see in my example the discrepancies are commas in the names.
Expected outcome for df2:
Age Name
24 Brad, Mans
32 Tony Jar
40 Sid Jones
65 Fred, Smith
50 Alex, Bot
The values in df2 should be changed to match df1's values.
Create a column in df1 with commas removed from the Name column
df1['Name_nocomma'] = df1.Name.str.replace(',', '')
Merge df1 to df2 using Name_nocomma and Name to get the corrected Name, creating a new version of df2:
df2_out = df2.merge(df1, left_on='Name', right_on='Name_nocomma', how='left')[['Age_x', 'Name_x', 'Name_y']]
Use combine_first to coalesce Name_y and Name_x into a new column, Name:
df2_out['Name'] = df2_out.Name_y.combine_first(df2_out.Name_x)
Drop / rename the intermediate columns:
del df1['Name_nocomma']
del df2_out['Name_x']
del df2_out['Name_y']
df2_out.rename({'Age_x': 'Age'}, axis=1, inplace=True)
df2_out
#outputs:
Age Name
0 24 Brad, Mans
1 32 Tony Jar
2 40 Sid Jones
3 65 Fred, Smith
4 50 Alex, Bot
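Putting the steps together, a minimal sketch, assuming names match one-to-one once commas are stripped:

import pandas as pd

df1 = pd.DataFrame({'Age': [40, 50, 32, 65, 24],
                    'Name': ['Sid Jones', 'Alex, Bot', 'Tony Jar',
                             'Fred, Smith', 'Brad, Mans']})
df2 = pd.DataFrame({'Age': [24, 32, 40, 65, 50],
                    'Name': ['Brad Mans', 'Tony Jar', 'Sid Jones',
                             'Fred Smith', 'Alex Bot']})

# Comma-free key on df1's side so the two Name spellings line up
key = df1.assign(Name_nocomma=df1['Name'].str.replace(',', '', regex=False))
merged = df2.merge(key, left_on='Name', right_on='Name_nocomma', how='left')
# Name_y is df1's original spelling; fall back to df2's where unmatched
df2['Name'] = merged['Name_y'].combine_first(merged['Name_x'])
print(df2)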
You need to sort and concatenate:
df1.sort_values(by=['Age'], inplace=True)
df2.sort_values(by=['Age'], inplace=True)
result_df = pd.concat([df1, df2])

Filter pandas dataframe based on a column: keep all rows if a value is in that column

So I have a dataframe like the following:
Name Age City
A 21 NY
A 20 DC
A 35 OR
B 18 DC
B 19 PA
I need to keep all the rows for every Name and Age pair where a specific value is among those associated with column City. For example if my target city is NY, then my desired output would be:
Name Age City
A 21 NY
A 20 DC
A 35 OR
Edit1: I am not necessarily looking for a single value. There might be cases where there are multiple cities that I am looking for. For example: NY and DC at the same time.
Edit2: I have tried the following, which does not return the correct output:
df = df[df['City'] == 'NY']
and
df = df[df['City'].isin('NY')]
You can create a function: first test City for equality and get all matching unique names, then filter those names with isin:
def get_df_by_val(df, val):
    return df[df['Name'].isin(df.loc[df['City'].eq(val), 'Name'].unique())]
print (get_df_by_val(df, 'NY'))
Name Age City
0 A 21 NY
1 A 20 DC
2 A 35 OR
print (get_df_by_val(df, 'PA'))
Name Age City
3 B 18 DC
4 B 19 PA
print (get_df_by_val(df, 'OR'))
Name Age City
0 A 21 NY
1 A 20 DC
2 A 35 OR
EDIT:
If you need to check multiple values per group, use GroupBy.transform and compare sets with issubset:
vals = ['NY', 'DC']
df1 = df[df.groupby('Name')['City'].transform(lambda x: set(vals).issubset(x))]
print (df1)
Name Age City
0 A 21 NY
1 A 20 DC
2 A 35 OR
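For completeness, a runnable sketch of the multi-city check on the sample data:

import pandas as pd

df = pd.DataFrame({'Name': ['A', 'A', 'A', 'B', 'B'],
                   'Age': [21, 20, 35, 18, 19],
                   'City': ['NY', 'DC', 'OR', 'DC', 'PA']})

# Keep every row of any Name group whose cities include all targets
vals = ['NY', 'DC']
mask = df.groupby('Name')['City'].transform(lambda x: set(vals).issubset(x))
print(df[mask])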
