Adding Row values as Columns in Python [duplicate]

This question already has answers here: How can I pivot a dataframe? (5 answers)
Closed 11 months ago.
I have the following table in pandas (notice how the item repeats for each warehouse):
id  Item  Warehouse       Price  Cost
1   Cake  US: California  30     20
1   Cake  US: Chicago     30     20
2   Meat  US: California  40     10
2   Meat  US: Chicago     40     10
And I need to add each warehouse as a separate column, like this:
id  Item  Warehouse 1     Warehouse 2  Price  Cost
1   Cake  US: California  US: Chicago  30     20
2   Meat  US: California  US: Chicago  40     10
Data:
{'id': [1, 1, 2, 2],
 'Item': ['Cake', 'Cake', 'Meat', 'Meat'],
 'Warehouse': ['US: California', 'US: Chicago', 'US: California', 'US: Chicago'],
 'Price': [30, 30, 40, 40],
 'Cost': [20, 20, 10, 10]}
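For the snippets below, this dict is assumed to be loaded into a DataFrame named df (a minimal setup sketch; both answers refer to df):

import pandas as pd

data = {'id': [1, 1, 2, 2],
        'Item': ['Cake', 'Cake', 'Meat', 'Meat'],
        'Warehouse': ['US: California', 'US: Chicago', 'US: California', 'US: Chicago'],
        'Price': [30, 30, 40, 40],
        'Cost': [20, 20, 10, 10]}
df = pd.DataFrame(data)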

You could assign a number to each warehouse for each id using groupby + cumcount, then pivot (the arguments are spelled out as keywords because pivot's parameters are keyword-only in recent pandas versions):
out = (df.assign(col_idx=df.groupby('Item').cumcount().add(1))
         .pivot(index=['id', 'Item', 'Price', 'Cost'], columns='col_idx', values='Warehouse')
         .add_prefix('Warehouse ').reset_index().rename_axis(columns=[None]))
Or you could use groupby + agg(list), then construct a DataFrame from the Warehouse lists and join it back (note this hard-codes two warehouse columns, which matches this data, where every item appears in exactly two warehouses):
out = df.groupby(['id', 'Item', 'Price', 'Cost']).agg(list).reset_index()
out = (out.drop(columns='Warehouse')
          .join(pd.DataFrame(out['Warehouse'].tolist(),
                             columns=['Warehouse 1', 'Warehouse 2'])))
Output:
id Item Price Cost Warehouse 1 Warehouse 2
0 1 Cake 30 20 US: California US: Chicago
1 2 Meat 40 10 US: California US: Chicago


Sort values intra group [duplicate]

This question already has an answer here: Pandas groupby sort each group values and order dataframe groups based on max of each group (1 answer)
Closed 1 year ago.
Suppose I have this dataframe:
df = pd.DataFrame({
'price': [2, 13, 24, 15, 11, 44],
'category': ["shirts", "pants", "shirts", "tops", "hat", "tops"],
})
price category
0 2 shirts
1 13 pants
2 24 shirts
3 15 tops
4 11 hat
5 44 tops
I want to sort the values in such a way that I:
1. Find the highest price in each category.
2. Sort the categories according to their highest price (here, in descending order: tops, shirts, pants, hat).
3. Sort the rows within each category by price, also in descending order.
The final dataframe would look like:
price category
0 44 tops
1 15 tops
2 24 shirts
3 2 shirts
4 13 pants
5 11 hat
I'm not a big fan of one-liners, so here's my solution:
# Add a column with the max price for each category
df = df.merge(df.groupby('category')['price'].max().rename('max_cat_price'),
              left_on='category', right_index=True)
# Sort by the per-category max, then by price within each category
df = df.sort_values(['max_cat_price', 'price'], ascending=False)
# Drop the column that has the max price for each category
df.drop('max_cat_price', axis=1, inplace=True)
print(df)
price category
5 44 tops
3 15 tops
2 24 shirts
0 2 shirts
1 13 pants
4 11 hat
You can use .groupby and .sort_values:
df.join(df.groupby("category").agg("max"), on="category", rsuffix="_r").sort_values(
    ["price_r", "price"], ascending=False
)
Output
price category price_r
5 44 tops 44
3 15 tops 44
2 24 shirts 24
0 2 shirts 24
1 13 pants 13
4 11 hat 11
I used get_group inside an apply on the category column to get the max price for each category:
df = pd.DataFrame({
'price': [2, 13, 24, 15, 11, 44],
'category': ["shirts", "pants", "shirts", "tops", "hat", "tops"],
})
grouped = df.groupby('category')
df['price_r'] = df['category'].apply(lambda cat: grouped.get_group(cat).price.max())
# sort by the per-category max, then by price within the category
df = df.sort_values(['price_r', 'price'], ascending=False)
print(df)
Output:
price category price_r
5 44 tops 44
3 15 tops 44
2 24 shirts 24
0 2 shirts 24
1 13 pants 13
4 11 hat 11
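For completeness, the same idea can be written with groupby.transform, which broadcasts the per-category max without a merge or get_group. A minimal sketch (max_cat_price is just an illustrative helper name):

out = (df.assign(max_cat_price=df.groupby('category')['price'].transform('max'))
         .sort_values(['max_cat_price', 'price'], ascending=False)
         .drop(columns='max_cat_price'))
print(out)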

Faster way to query & compute in Pandas [duplicate]

This question already has answers here: Pandas Merging 101 (8 answers)
Closed 2 years ago.
I have two dataframes in Pandas. What I want to achieve is: grab every 'Name' from DF1 and get the corresponding 'City' and 'State' from DF2.
For example, 'Dwight' from DF1 should return corresponding values 'Miami' and 'Florida' from DF2.
DF1
Name Age Student
0 Dwight 20 Yes
1 Michael 30 No
2 Pam 55 No
. . . .
70000 Jim 27 Yes
DF1 has approximately 70,000 rows and 3 columns. The second dataframe, DF2, has approximately 320,000 rows:
Name City State
0 Dwight Miami Florida
1 Michael Scranton Pennsylvania
2 Pam Austin Texas
. . . . .
325082 Jim Scranton Pennsylvania
Currently I have two functions that return the values of 'City' and 'State' using a filter:
def read_city(id):
    filt = (df2['Name'] == id)
    if filt.any():
        field = df2[filt]['City'].values[0]
    else:
        field = ""
    return field

def read_state(id):
    filt = (df2['Name'] == id)
    if filt.any():
        field = df2[filt]['State'].values[0]
    else:
        field = ""
    return field
I am using the apply function to process all the values.
df['city_list'] = df['Name'].apply(read_city)
df['State_list'] = df['Name'].apply(read_state)
Computed this way, the result takes a long time: roughly 18 minutes to get back df['city_list'] and df['State_list'].
Is there a faster way to compute this? Since I am completely new to pandas, I would like to know if there is an efficient way.
I believe you can do a map:
s = df2.groupby('Name')[['City','State']].agg(list)
df['city_list'] = df['Name'].map(s['City'])
df['State_list'] = df['Name'].map(s['State'])
Or a left merge after you got s:
df = df.merge(s.add_suffix('_list'), left_on='Name', right_index=True, how='left')
I think you can do something like this:
# Dataframe DF1 (dummy data)
DF1 = pd.DataFrame(columns=['Name', 'Age', 'Student'], data=[['Dwight', 20, 'Yes'], ['Michael', 30, 'No'], ['Pam', 55, 'No'], ['Jim', 27, 'Yes']])
print("DataFrame DF1")
print(DF1)
# Dataframe DF2 (dummy data)
DF2 = pd.DataFrame(columns=['Name', 'City', 'State'], data=[['Dwight', 'Miami', 'Florida'], ['Michael', 'Scranton', 'Pennsylvania'], ['Pam', 'Austin', 'Texas'], ['Jim', 'Scranton', 'Pennsylvania']])
print("DataFrame DF2")
print(DF2)
# Merge on the 'Name' column, then rename the 'City' and 'State' columns
df = pd.merge(DF1, DF2, on=['Name']).rename(columns={'City': 'city_list', 'State': 'State_list'})
print("DataFrame final")
print(df)
Output:
DataFrame DF1
Name Age Student
0 Dwight 20 Yes
1 Michael 30 No
2 Pam 55 No
3 Jim 27 Yes
DataFrame DF2
Name City State
0 Dwight Miami Florida
1 Michael Scranton Pennsylvania
2 Pam Austin Texas
3 Jim Scranton Pennsylvania
DataFrame final
Name Age Student city_list State_list
0 Dwight 20 Yes Miami Florida
1 Michael 30 No Scranton Pennsylvania
2 Pam 55 No Austin Texas
3 Jim 27 Yes Scranton Pennsylvania
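One caveat: a plain merge keeps only names present in both frames and duplicates rows when a name occurs several times in DF2, while the original functions returned "" for missing names and took only the first match (.values[0]). A sketch that mirrors that exact behavior, under the assumption that the first occurrence per name is the one wanted:

# keep only the first City/State per Name, as .values[0] did
first = DF2.drop_duplicates(subset='Name')
out = DF1.merge(first, on='Name', how='left')
# the original functions returned "" when a name was missing from DF2
out[['City', 'State']] = out[['City', 'State']].fillna('')
out = out.rename(columns={'City': 'city_list', 'State': 'State_list'})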

Pandas - Create column with difference in values

I have the dataset below. How can I create a new column that shows the difference in money for each person, for each expiry?
The column in yellow is what I want: the difference in money at each expiry point for that person. I highlighted the other rows in colors so it is clearer.
Thanks a lot.
Example
import pandas as pd
import numpy as np

example = pd.DataFrame(data={'Day': ['2020-08-30', '2020-08-30', '2020-08-30', '2020-08-30',
                                     '2020-08-29', '2020-08-29', '2020-08-29', '2020-08-29'],
                             'Name': ['John', 'Mike', 'John', 'Mike', 'John', 'Mike', 'John', 'Mike'],
                             'Money': [100, 950, 200, 1000, 50, 50, 250, 1200],
                             'Expiry': ['1Y', '1Y', '2Y', '2Y', '1Y', '1Y', '2Y', '2Y']})

example_0830 = example[example['Day'] == '2020-08-30'].reset_index()
example_0829 = example[example['Day'] == '2020-08-29'].reset_index()
example_0830['key'] = example_0830['Name'] + example_0830['Expiry']
example_0829['key'] = example_0829['Name'] + example_0829['Expiry']
example_0829 = pd.DataFrame(example_0829, columns=['key', 'Money'])
example_0830 = pd.merge(example_0830, example_0829, on='key')
example_0830['Difference'] = example_0830['Money_x'] - example_0830['Money_y']
example_0830 = example_0830.drop(columns=['key', 'Money_y', 'index'])
Result:
Day Name Money_x Expiry Difference
0 2020-08-30 John 100 1Y 50
1 2020-08-30 Mike 950 1Y 900
2 2020-08-30 John 200 2Y -50
3 2020-08-30 Mike 1000 2Y -200
If the difference is always relative to the previous date, you can define the dates at the start, identify today (t) and the previous day (t-1), and filter the original dataframe on those two values.
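A minimal sketch of that idea, reusing the example frame built above; index alignment on (Name, Expiry) replaces the manual key column:

days = sorted(example['Day'].unique(), reverse=True)
t, t_minus_1 = days[0], days[1]  # '2020-08-30' and '2020-08-29'

today = example[example['Day'] == t].set_index(['Name', 'Expiry'])
prev = example[example['Day'] == t_minus_1].set_index(['Name', 'Expiry'])
# subtraction aligns on the shared (Name, Expiry) index
today['Difference'] = today['Money'] - prev['Money']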
You can solve it with groupby.diff
Take the dataframe
df = pd.DataFrame({
'Day': [30, 30, 30, 30, 29, 29, 28, 28],
'Name': ['John', 'Mike', 'John', 'Mike', 'John', 'Mike', 'John', 'Mike'],
'Money': [100, 950, 200, 1000, 50, 50, 250, 1200],
'Expiry': [1, 1, 2, 2, 1, 1, 2, 2]
})
print(df)
Which looks like
Day Name Money Expiry
0 30 John 100 1
1 30 Mike 950 1
2 30 John 200 2
3 30 Mike 1000 2
4 29 John 50 1
5 29 Mike 50 1
6 28 John 250 2
7 28 Mike 1200 2
And the code:
# make sure the dates are in the order we want (most recent first)
df = df.sort_values('Day', ascending=False)
# group by Name and Expiry, then take the difference from the next row in each group:
# diff(1) computes the difference from the previous row, so diff(-1) points to the next
df['Difference'] = df.groupby(['Name', 'Expiry']).Money.diff(-1)
Output
Day Name Money Expiry Difference
0 30 John 100 1 50.0
1 30 Mike 950 1 900.0
2 30 John 200 2 -50.0
3 30 Mike 1000 2 -200.0
4 29 John 50 1 NaN
5 29 Mike 50 1 NaN
6 28 John 250 2 NaN
7 28 Mike 1200 2 NaN
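The same column can also be written with groupby.shift, which some find more explicit; a sketch on the df above:

# shift(-1) fetches the next row's Money within each (Name, Expiry) group
df['Difference'] = df['Money'] - df.groupby(['Name', 'Expiry'])['Money'].shift(-1)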

Python DataFrame : find previous row's value before a specific value with same value in other columns

I have a dataframe as follows:
import pandas as pd
d = {
    'Name': ['James', 'John', 'Peter', 'Thomas', 'Jacob', 'Andrew', 'John', 'Peter', 'Thomas', 'Jacob', 'Peter', 'Thomas'],
    'Order': [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3],
    'Place': ['Paris', 'London', 'Rome', 'Paris', 'Venice', 'Rome', 'Paris', 'Paris', 'London', 'Paris', 'Milan', 'Milan']
}
df = pd.DataFrame(d)
Name Order Place
0 James 1 Paris
1 John 1 London
2 Peter 1 Rome
3 Thomas 1 Paris
4 Jacob 1 Venice
5 Andrew 1 Rome
6 John 2 Paris
7 Peter 2 Paris
8 Thomas 2 London
9 Jacob 2 Paris
10 Peter 3 Milan
11 Thomas 3 Milan
The dataframe represents people visiting various cities; the Order column defines the order of the visits.
I would like to find which city each person visited before Paris.
Expected dataframe is as follows
Name Order Place
1 John 1 London
2 Peter 1 Rome
4 Jacob 1 Venice
What is the pythonic way to find it?
Using merge:
s = df.loc[df.Place.eq('Paris'), ['Name', 'Order']]
m = s.assign(Order=s.Order.sub(1))
m.merge(df, on=['Name', 'Order'])
Name Order Place
0 John 1 London
1 Peter 1 Rome
2 Jacob 1 Venice
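A hedged alternative sketch: sort by Order, then look one row ahead within each Name group using groupby.shift. This assumes sorting by Order reproduces each person's visiting sequence, as it does here (row order in the result differs from the merge version):

df = df.sort_values(['Name', 'Order'])
# keep rows whose next visit (within the same Name) is Paris
mask = df.groupby('Name')['Place'].shift(-1).eq('Paris')
print(df[mask])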

Add column and fill missing value based on other columns in Pandas

For the following input data, I need to fill in the missing office_numbers and create one column that distinguishes whether an office_number is original or was filled in afterwards.
Here is the example data:
df = pd.DataFrame({'id': ['1010084420', '1010084420', '1010084420', '1010084421', '1010084421', '1010084421', '1010084425'],
                   'building_name': ['A', 'A', 'A', 'East Tower', 'East Tower', 'West Tower', 'T1'],
                   'floor': ['1', '1', '2', '10', '10', '11', '11'],
                   'office_number': ['', '', '205', '', '', '', '1101-1105'],
                   'company_name': ['Ariel Resources Ltd.', 'A.O. Tatneft', '', 'Agrium Inc.', 'Creo Products Inc.', 'Cott Corp.', 'Creo Products Inc.']})
print(df)
Output:
id building_name floor office_number company_name
0 1010084420 A 1 Ariel Resources Ltd.
1 1010084420 A 1 A.O. Tatneft
2 1010084420 A 2 205
3 1010084421 East Tower 10 Agrium Inc.
4 1010084421 East Tower 10 Creo Products Inc.
5 1010084421 West Tower 11 Cott Corp.
6 1010084425 T1 11 1101-1105 Creo Products Inc.
When office_number is empty, I need to fill it for the offices with the same id, building_name and floor, using the rule: floor value + 'F' + a running counter (001, 002, 003, and so on). I also need to create a column office_num_status: original where office_number was present, otherwise filled.
This is the final expected result:
id building_name floor office_num_status office_number \
0 1010084420 A 1 filled 1F001
1 1010084420 A 1 filled 1F002
2 1010084420 A 2 original 205
3 1010084421 East Tower 10 filled 10F001
4 1010084421 East Tower 10 filled 10F002
5 1010084421 West Tower 11 filled 11F001
6 1010084425 T1 11 original 1101-1105
company_name
0 Ariel Resources Ltd.
1 A.O. Tatneft
2
3 Agrium Inc.
4 Creo Products Inc.
5 Cott Corp.
6 Creo Products Inc.
What I have done so far is create the office_num_status column, but all its values come out as original:
# method 1
df['office_num_status'] = np.where(df['office_number'].isnull(), 'filled', 'original')
# method 2
df['office_num_status'] = ['filled' if x is None else 'original' for x in df['office_number']]
# method 3
df['office_num_status'] = 'filled'
df.loc[df['office_number'] is not None, 'office_num_status'] = 'original'
Could someone can help me to finish this? Thanks a lot.
Compare against the empty string instead of a missing value, add a counter with GroupBy.cumcount, and fill in the non-existing values:
mask = df['office_number'] == ''
df.insert(3, 'office_num_status', np.where(mask, 'filled', 'original'))
s = df.groupby(['id','building_name','floor']).cumcount().add(1).astype(str).str.zfill(3)
df.loc[mask, 'office_number'] = df['floor'].astype(str) + 'F' + s
print(df)
id building_name floor office_num_status office_number \
0 1010084420 A 1 filled 1F001
1 1010084420 A 1 filled 1F002
2 1010084420 A 2 original 205
3 1010084421 East Tower 10 filled 10F001
4 1010084421 East Tower 10 filled 10F002
5 1010084421 West Tower 11 filled 11F001
6 1010084425 T1 11 original 1101-1105
company_name
0 Ariel Resources Ltd.
1 A.O. Tatneft
2
3 Agrium Inc.
4 Creo Products Inc.
5 Cott Corp.
6 Creo Products Inc.
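An equivalent route, as a sketch starting from a fresh df, is to turn the empty strings into real NaN first; the isnull-based status check from the question then works as intended (here office_num_status ends up as the last column rather than being inserted at position 3):

import numpy as np

df['office_number'] = df['office_number'].replace('', np.nan)
mask = df['office_number'].isna()
df['office_num_status'] = np.where(mask, 'filled', 'original')
counter = df.groupby(['id', 'building_name', 'floor']).cumcount().add(1)
df.loc[mask, 'office_number'] = (df['floor'].astype(str) + 'F'
                                 + counter.astype(str).str.zfill(3))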
