Python Filling Forward by Section

I have a dataframe that looks like this:
  group_a      group_b  group_c  group_d
0 maintenance  65       green    steve
1 maintenance           blue     Sally
2 maintenance           pink     Jay
3 helpdesk              green    Ian
4 hr           32       green    Tyler
What I want to return, is a dataframe that looks like this:
  group_a      group_b  group_c  group_d
0 maintenance  65       green    steve
1 maintenance  65       blue     Sally
2 maintenance  65       pink     Jay
3 helpdesk              green    Ian
4 hr           32       green    Tyler
I want to be able to forward fill group_b, but I want to do it within each group_a.
Is there a way to do that?

Replace the empty strings with missing values, then forward fill the values within each group:
df['group_b'] = df['group_b'].replace('', np.nan)
df['group_b'] = df.groupby('group_a')['group_b'].ffill()
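For reference, a minimal runnable sketch of this approach, assuming the example frame is built by hand with the blanks stored as empty strings:
import numpy as np
import pandas as pd

# Hypothetical reconstruction of the example frame; blanks are empty strings
df = pd.DataFrame({
    'group_a': ['maintenance', 'maintenance', 'maintenance', 'helpdesk', 'hr'],
    'group_b': ['65', '', '', '', '32'],
    'group_c': ['green', 'blue', 'pink', 'green', 'green'],
    'group_d': ['steve', 'Sally', 'Jay', 'Ian', 'Tyler'],
})

df['group_b'] = df['group_b'].replace('', np.nan)
df['group_b'] = df.groupby('group_a')['group_b'].ffill()
print(df)  # the maintenance rows now all carry 65; helpdesk stays NaN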

Mask on df.group_a using the .loc accessor, replace the empty strings with NaN, then ffill the masked slice:
df.loc[df.group_a.eq('maintenance'), 'group_b'] = df.loc[df.group_a.eq('maintenance'), 'group_b'].replace('', np.nan).ffill()
  group_a      group_b  group_c  group_d
0 maintenance  65       green    steve
1 maintenance  65       blue     Sally
2 maintenance  65       pink     Jay
3 helpdesk              green    Ian
4 hr           32       green    Tyler
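If the same treatment is needed for every group rather than only 'maintenance', one way (a sketch building on the snippet above, not part of the original answer) is to loop over the group labels:
import numpy as np

# Sketch: repeat the mask-and-ffill for each distinct group_a label
for grp in df['group_a'].unique():
    m = df['group_a'].eq(grp)
    df.loc[m, 'group_b'] = df.loc[m, 'group_b'].replace('', np.nan).ffill()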

The pandas function fillna() can take a Series mapping each index label to the value that should be used to fill the missing values.
So we need to build that mapping Series:
df = df.set_index("group_a") # in case it wasn't already the index
df = df.replace("", np.nan) # in case your missing values are empty strings instead of actual NaNs
mapping = df["group_b"].dropna().drop_duplicates()
Now we can:
df["group_b"].fillna(mapping, inplace=True)
df
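A minimal end-to-end sketch of this mapping approach with the sample data (the frame is reconstructed by hand here, with the blanks stored as empty strings):
import numpy as np
import pandas as pd

# Hypothetical reconstruction of the example frame
df = pd.DataFrame({
    'group_a': ['maintenance', 'maintenance', 'maintenance', 'helpdesk', 'hr'],
    'group_b': ['65', '', '', '', '32'],
    'group_c': ['green', 'blue', 'pink', 'green', 'green'],
    'group_d': ['steve', 'Sally', 'Jay', 'Ian', 'Tyler'],
})

df = df.set_index('group_a').replace('', np.nan)
mapping = df['group_b'].dropna().drop_duplicates()  # one group_b value per group_a label
df['group_b'] = df['group_b'].fillna(mapping)       # fillna aligns on the group_a index
print(df.reset_index())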

Related

Combining three datasets removing duplicates

I have three datasets:
dataset 1
Customer1       Customer2         Exposures  + other columns
Nick McKenzie   Christopher Mill  23450
Nick McKenzie   Stephen Green     23450
Johnny Craston  Mary Shane        12
Johnny Craston  Stephen Green     12
Molly John      Casey Step        1000021
dataset 2 (unique customers: Customer1 + Customer2)
Customer          Age
Nick McKenzie     53
Johnny Craston    75
Molly John        34
Christopher Mill  63
Stephen Green     65
Mary Shane        54
Casey Step        34
Mick Sale
dataset 3
Customer1  Customer2       Exposures  + other columns
Mick Sale  Johnny Craston
Mick Sale  Stephen Green
Exposures refers to Customer1 only. Other columns are omitted for brevity. Dataset 2 is built from the unique values of Customer1 and Customer2, so it contains no duplicates. Dataset 3 has the same columns as dataset 1.
I'd like to add the information from dataset 1 into dataset 2 to have
Final dataset
Customer          Age  Exposures  + other columns
Nick McKenzie     53   23450
Johnny Craston    75   12
Molly John        34   1000021
Christopher Mill  63
Stephen Green     65
Mary Shane        54
Casey Step        34
Mick Sale
The final dataset should include every Customer1 and Customer2 from both dataset 1 and dataset 3, with no duplicates.
I have tried to combine them as follows:
result = pd.concat([df2, df1, df3], axis=1)
but the result is not the one I expect.
Something is wrong with the way I am concatenating the datasets, and I'd appreciate it if you could tell me what it is.
After concatenating df1 and df3 (they have the same columns), we can remove the duplicates using drop_duplicates(subset=['Customer1']) and then join with df2 like this:
df1.set_index('Customer1').join(df2.set_index('Customer'))
In case df1 and df2 have different columns beyond the key, we can join using the command above and then join again with the age table.
This gives the desired result: you can concatenate dataset 1 and dataset 3 because they have the same columns, and then run this join, specifying the respective keys.
Note: though not strictly part of the question, the concatenation itself can be done with pd.concat([df1, df3], ignore_index=True) (here we ignore the original index).
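Putting it together, a minimal sketch of that pipeline with the sample data (frames reconstructed by hand, extra columns omitted):
import pandas as pd

df1 = pd.DataFrame({
    'Customer1': ['Nick McKenzie', 'Nick McKenzie', 'Johnny Craston', 'Johnny Craston', 'Molly John'],
    'Customer2': ['Christopher Mill', 'Stephen Green', 'Mary Shane', 'Stephen Green', 'Casey Step'],
    'Exposures': [23450, 23450, 12, 12, 1000021],
})
df2 = pd.DataFrame({
    'Customer': ['Nick McKenzie', 'Johnny Craston', 'Molly John', 'Christopher Mill',
                 'Stephen Green', 'Mary Shane', 'Casey Step', 'Mick Sale'],
    'Age': [53, 75, 34, 63, 65, 54, 34, None],
})
df3 = pd.DataFrame({
    'Customer1': ['Mick Sale', 'Mick Sale'],
    'Customer2': ['Johnny Craston', 'Stephen Green'],
    'Exposures': [None, None],
})

# Stack dataset 1 and dataset 3 (same columns), keep one row per Customer1,
# then attach the exposures to dataset 2 by customer name
exposures = (pd.concat([df1, df3], ignore_index=True)
               .drop_duplicates(subset=['Customer1'])
               .set_index('Customer1')['Exposures'])
result = df2.set_index('Customer').join(exposures).reset_index()
print(result)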

Python Pandas concatenate every 2nd row to previous row

I have a Pandas dataframe similar to this one:
   age     name  sex
0  30      jon   male
1  blue    php   null
2  18      jane  female
3  orange  c++   null
and I am trying to concatenate every second row to the previous one adding extra columns:
   age  name  sex     colour  language  other
0  30   jon   male    blue    php       null
1  18   jane  female  orange  c++       null
I tried shift() but it duplicated every row.
How can this be done?
You can create a new dataframe by slicing the dataframe using iloc with a step of 2:
cols = ['age', 'name', 'sex']
new_cols = ['colour', 'language', 'other']
d = dict()
for col, ncol in zip(cols, new_cols):
    d[col] = df[col].iloc[::2].values
    d[ncol] = df[col].iloc[1::2].values
pd.DataFrame(d)
Result:
   age  colour  name  language  sex     other
0  30   blue    jon   php       male    NaN
1  18   orange  jane  c++       female  NaN
TRY:
df = pd.concat([df.iloc[::2].reset_index(drop=True),
                pd.DataFrame(df.iloc[1::2].values,
                             columns=['colour', 'language', 'other'])], axis=1)
OUTPUT:
   age  name  sex     colour  language  other
0  30   jon   male    blue    php       NaN
1  18   jane  female  orange  c++       NaN
Reshape the values and create a new dataframe:
pd.DataFrame(df.values.reshape(-1, df.shape[1] * 2),
             columns=['age', 'name', 'sex', 'colour', 'language', 'other'])
   age  name  sex     colour  language  other
0  30   jon   male    blue    php       NaN
1  18   jane  female  orange  c++       NaN
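For completeness, a small runnable sketch of the reshape approach with the sample frame (reconstructed by hand; it assumes an even number of rows):
import pandas as pd

# Hypothetical reconstruction of the sample frame
df = pd.DataFrame({
    'age': [30, 'blue', 18, 'orange'],
    'name': ['jon', 'php', 'jane', 'c++'],
    'sex': ['male', None, 'female', None],
})

# Each pair of consecutive rows becomes one row of 2 * 3 = 6 columns
out = pd.DataFrame(df.values.reshape(-1, df.shape[1] * 2),
                   columns=['age', 'name', 'sex', 'colour', 'language', 'other'])
print(out)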

Join Pandas DataFrames matching by string and substring

I want to merge two dataframes by partial string match.
I have two dataframes to combine. The first, df1, consists of 130,000 rows like this:
id  text                      xc1    xc2
1   adidas men shoes          52465  220
2   vakko men suits           49220  224
3   burberry men shirt        78248  289
4   prada women shoes         45780  789
5   lcwaikiki men sunglasses  34788  745
and the second, df2, consists of 8,000 rows like this:
id  keyword         abc1  abc2
1   men shoes       1000  11
2   men suits       2000  12
3   men shirt       3000  13
4   women socks     4000  14
5   men sunglasses  5000  15
After matching keyword against text, the output should look like this:
id  text                      xc1    xc2  keyword         abc1  abc2
1   adidas men shoes          52465  220  men shoes       1000  11
2   vakko men suits           49220  224  men suits       2000  12
3   burberry men shirt        78248  289  men shirt       3000  13
4   lcwaikiki men sunglasses  34788  745  men sunglasses  5000  15
Let's approach this by cross joining the 2 dataframes and then filtering by matching string against substring, as follows:
import re

df3 = df1.merge(df2, how='cross')  # for Pandas version >= 1.2.0 (released in Dec 2020)
mask = df3.apply(lambda x: re.search(rf"\b{x['keyword']}\b", str(x['text'])) is not None, axis=1)
df_out = df3.loc[mask]
If your Pandas version is older than 1.2.0 (released in Dec 2020) and does not support merge with how='cross', you can replace the merge statement with:
# For Pandas version < 1.2.0
df3 = df1.assign(key=1).merge(df2.assign(key=1), on='key').drop('key', axis=1)
After the cross join, we create a boolean mask to filter for the cases where keyword is found within text, using re.search within .apply().
We have to use re.search instead of a simple Python substring test like stringA in stringB, which is what most similar answers on StackOverflow use. Such a test gives a false match of the keyword 'men suits' against the text 'women suits', since 'men suits' in 'women suits' returns True.
We use regex with a pair of word boundary \b meta-characters around the keyword (regex pattern: rf"\b{x['keyword']}\b") to ensure only whole-word matches against text in df1, i.e. men suits in df2 would not match women suits in df1, since the word women does not have a word boundary between the letters wo and men.
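As a quick standalone illustration of why the word boundaries matter here:
import re

print(re.search(r"\bmen suits\b", "vakko men suits") is not None)    # True
print(re.search(r"\bmen suits\b", "prada women suits") is not None)  # False: no boundary between "wo" and "men"
print("men suits" in "prada women suits")                            # True: the plain substring test falsely matches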
Result:
print(df_out)
    id_x  text                      xc1    xc2  id_y  keyword         abc1  abc2
0   1     adidas men shoes          52465  220  1     men shoes       1000  11
6   2     vakko men suits           49220  224  2     men suits       2000  12
12  3     burberry men shirt        78248  289  3     men shirt       3000  13
24  5     lcwaikiki men sunglasses  34788  745  5     men sunglasses  5000  15
Here, columns id_x and id_y are the original id columns of df1 and df2 respectively. As seen from the comment, these are just row numbers of the dataframes, which you may not care about. We can remove these 2 columns and reset the index to clean up the layout:
df_out = df_out.drop(['id_x', 'id_y'], axis=1).reset_index(drop=True)
Final outcome:
print(df_out)
   text                      xc1    xc2  keyword         abc1  abc2
0  adidas men shoes          52465  220  men shoes       1000  11
1  vakko men suits           49220  224  men suits       2000  12
2  burberry men shirt        78248  289  men shirt       3000  13
3  lcwaikiki men sunglasses  34788  745  men sunglasses  5000  15
Let's start by ordering the keywords longest-first, so that "women suits" matches before "men suits":
lkeys = df2.keyword.reindex(df2.keyword.str.len().sort_values(ascending=False).index)
Now define a matching function; each text value from df1 will be passed as s to find a matching keyword:
def is_match(arr, s):
    for a in arr:
        if a in s:
            return a
    return None
Now we can extract the keyword from each text in df1, and add it to a new column:
df1['keyword'] = df1['text'].apply(lambda x: is_match(lkeys, x))
We now have everything we need for a standard merge:
pd.merge(df1, df2, on='keyword')
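For instance, calling is_match directly (the keyword list here is written out by hand, longest first, just to illustrate the lookup):
keys = ['men sunglasses', 'women socks', 'men shoes', 'men suits', 'men shirt']
print(is_match(keys, 'lcwaikiki men sunglasses'))  # 'men sunglasses'
print(is_match(keys, 'burberry women hats'))       # None: no keyword is contained in the text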

Python - Performing Max Function on Multiple Groupby

I have a data frame below that shows the price of wood and steel from two different suppliers.
I would like to add a column that shows the highest price for the opposite item (i.e. if line is wood, it would pull steel) from the same supplier.
For example, the "Steel" row for "Tom" would show his highest wood price which is 42.
The code I have so far simply returns the highest price for the original item (i.e. not the opposite, so for Tom's steel row it returns 24, but I wanted it to return 42).
I think this is an issue with pulling the max value for a multi-column groupby. I have tried a number of different ways but just cannot seem to get it.
Any thoughts would be greatly appreciated.
import pandas as pd
import numpy as np

data = {'Supplier': ['Tom', 'Tom', 'Tom', 'Bill', 'Bill', 'Bill'],
        'Item': ['Wood', 'Wood', 'Steel', 'Steel', 'Steel', 'Wood'],
        'Price': [42, 33, 24, 16, 12, 18]}
df = pd.DataFrame(data)
df['Opp_Item'] = np.where(df['Item'] == "Wood", "Steel", "Wood")
df['Opp_Item_Max'] = df.groupby(['Supplier', 'Opp_Item'])['Price'].transform(max)
print(df)
  Supplier   Item  Price Opp_Item  Opp_Item_Max
0      Tom   Wood     42    Steel            42
1      Tom   Wood     33    Steel            42
2      Tom  Steel     24     Wood            24
3     Bill  Steel     16     Wood            16
4     Bill  Steel     12     Wood            16
5     Bill   Wood     18    Steel            18
If you can find the per supplier+item maximum, then you can just swap the values and assign them back through a join:
v = df.groupby(['Supplier', 'Item']).Price.max().unstack(-1)
# This reversal operation works under the assumption that
# there are only two items and that they are opposites of each other.
v[:] = v.values[:, ::-1]
df = (df.set_index(['Supplier', 'Item'])
        .join(v.stack().to_frame('Opp_Item_Max'), how='left')
        .reset_index())
print(df)
  Supplier   Item  Price  Opp_Item_Max
0     Bill  Steel     16            18
1     Bill  Steel     12            18
2     Bill   Wood     18            16
3      Tom  Steel     24            42
4      Tom   Wood     42            24
5      Tom   Wood     33            24
Note: Ordering of your data will not be preserved after the join.
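For reference, with the sample data the intermediate pivot v looks like this before the reversal:
Item      Steel  Wood
Supplier
Bill         16    18
Tom          24    42
and after v[:] = v.values[:, ::-1] the two columns are swapped, so each Item column now carries the other item's maximum:
Item      Steel  Wood
Supplier
Bill         18    16
Tom          42    24
Stacking that swapped frame and joining it back onto df is what produces the Opp_Item_Max column shown above.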
You could map to the opposite values before a groupby, and then merge this back to the original DataFrame.
d = {'Steel': 'Wood', 'Wood': 'Steel'}
df.merge(df.assign(Item=df.Item.map(d))
           .groupby(['Supplier', 'Item'], as_index=False).max(),
         on=['Supplier', 'Item'],
         how='left',
         suffixes=['', '_Opp_Item'])
  Supplier   Item  Price  Price_Opp_Item
0      Tom   Wood     42              24
1      Tom   Wood     33              24
2      Tom  Steel     24              42
3     Bill  Steel     16              18
4     Bill  Steel     12              18
5     Bill   Wood     18              16

Checking unique value for a variable in a different column

I currently have a dataframe which looks like this:
    Owner Vehicle_Color
0   James           Red
1   Peter         Green
2   James          Blue
3   Sally          Blue
4  Steven           Red
5   James          Blue
6   James           Red
7   Peter          Blue
And I am trying to verify whether each Owner has one or multiple vehicle colors assigned to them. Keeping in mind that my dataframe has more than a million entries for owners (which can be duplicated), what would be the best solution?
One way may be to use groupby and nunique:
df.groupby('Owner')['Vehicle_Color'].nunique()
Results:
Owner
James 2
Peter 2
Sally 1
Steven 1
Name: Vehicle_Color, dtype: int64
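If you only need the owners with more than one colour, you can filter that result (a small follow-up sketch):
counts = df.groupby('Owner')['Vehicle_Color'].nunique()
print(counts[counts > 1])  # James and Peter each have 2 distinct colours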
