Calculation involving two DataFrames - python

I have two DataFrames.
df1:
mat inv
0 100 23
1 101 35
2 102 110
df2:
mat sale
0 100 45
1 101 100
2 102 90
I merged the DataFrames into df:
mat inv sale
0 100 23 45
1 101 35 100
2 102 110 90
so that I could create another column, days:
df['days'] = df.inv / df.sale * 30
Then I delete the sale column and get this as the result:
df:
mat inv days
0 100 23 15
1 101 35 10
2 102 110 36
Can I create the days column directly in df1, without merging the DataFrames first? I don't need df2's column itself, just its values for the days calculation, and I'd rather not merge the frames only to delete the column at the end.

You can create the new column directly if you make sure the mat columns align properly:
df1 = df1.set_index('mat')
df2 = df2.set_index('mat')
df2['days'] = df1.inv.div(df2.sale).mul(30)
sale days
mat
100 45 15.33
101 100 10.50
102 90 36.67
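Note that this puts days in df2; since the question asks for the column in df1, the same index-aligned division can target df1 instead (a small variation on the code above):
df1['days'] = df1.inv.div(df2.sale).mul(30)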

You can also do it this way:
In [181]: df1['days'] = (df1.inv / df1['mat'].map(df2.set_index('mat')['sale']) * 30).astype(int)
In [182]: df1
Out[182]:
mat inv days
0 100 23 15
1 101 35 10
2 102 110 36

Surely df1['days'] = df1['inv'] / df2['sale'] * 30 works?
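It does here, but only because both frames share the same default RangeIndex: the division aligns on index labels, not on mat. A short sketch of the distinction, using the original (un-indexed) frames:
# Happens to pair the right rows because both indexes are 0..2:
df1['days'] = df1['inv'] / df2['sale'] * 30
# Robust if df2 is reordered or indexed differently: align on `mat` explicitly
df1['days'] = df1['inv'] / df1['mat'].map(df2.set_index('mat')['sale']) * 30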


Pandas conditional lookup based on columns from a different dataframe

I have searched but found no answers for my problem. My first dataframe looks like:
df1
Item Value
1 23
2 3
3 45
4 65
5 17
6 6
7 18
… …
500 78
501 98
and the second lookup table looks like
df2
L1 H1 L2 H2 L3 H3 L4 H4 L5 H5 Name
1 3 5 6 11 78 86 88 90 90 A
4 4 7 10 79 85 91 99 110 120 B
89 89 91 109 0 0 0 0 0 0 C
...
What I am trying to do is to get Name from df2 to df1 when Item in df1 falls between the Low (L) and High (H) columns. Something (which does not work) like:
df1['Name'] = np.where((df1['Item']>=df2['L1'] & df1['Item']<=df2['H1']) |
                       (df1['Item']>=df2['L2'] & df1['Item']<=df2['H2']) |
                       (df1['Item']>=df2['L3'] & df1['Item']<=df2['H3']) |
                       (df1['Item']>=df2['L4'] & df1['Item']<=df2['H4']) |
                       (df1['Item']>=df2['L5'] & df1['Item']<=df2['H5']) |
                       (df1['Item']>=df2['L6'] & df1['Item']<=df2['H6']),
                       df2['Name'], "Other")
So that the result would be like:
Item Value Name
1 23 A
2 3 A
3 45 A
4 65 B
5 17 A
6 6 A
7 18 A
… … …
500 78 K
501 98 Other
If you have any guidance for my problem to share, I would much appreciate it! Thank you in advance!
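For reference, a minimal reconstruction of the sample frames, using only the rows shown above (the elided rows are left out):
import pandas as pd
df1 = pd.DataFrame({"Item": [1, 2, 3, 4, 5, 6, 7, 500, 501],
                    "Value": [23, 3, 45, 65, 17, 6, 18, 78, 98]})
df2 = pd.DataFrame({"L1": [1, 4, 89], "H1": [3, 4, 89],
                    "L2": [5, 7, 91], "H2": [6, 10, 109],
                    "L3": [11, 79, 0], "H3": [78, 85, 0],
                    "L4": [86, 91, 0], "H4": [88, 99, 0],
                    "L5": [90, 110, 0], "H5": [90, 120, 0],
                    "Name": ["A", "B", "C"]})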
Try:
1. Transform df2 using wide_to_long.
2. Create lists of numbers from "L" to "H" for each row using apply and range.
3. explode to have one value in each row.
4. map each "Item" in df1 using a dict created from the ranges, with the structure {value: name}.
ranges = pd.wide_to_long(df2, ["L","H"], i="Name", j="Subset")
ranges["values"] = ranges.apply(lambda x: list(range(x["L"], x["H"]+1)), axis=1)
ranges = ranges.explode("values").reset_index()
df1["Name"] = df1["Item"].map(dict(zip(ranges["values"], ranges["Name"])))
>>> df1
Item Value Name
0 1 23 A
1 2 3 A
2 3 45 A
3 4 65 B
4 5 17 A
5 6 6 A
6 7 18 B
7 500 78 NaN
8 501 98 NaN
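Unmatched items come back as NaN; to get the question's "Other" label instead, one can chain fillna onto the map (a small addition to the answer above):
df1["Name"] = df1["Item"].map(dict(zip(ranges["values"], ranges["Name"]))).fillna("Other")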
A faster option (benchmarks can confirm or debunk that) would be to use conditional_join from pyjanitor (conditional_join uses binary search under the hood):
#pip install pyjanitor
import pandas as pd
import janitor
temp = (pd.wide_to_long(df2,
                        stubnames=['L', 'H'],
                        i='Name',
                        j='Num')
          .reset_index('Name')
       )
# the `Num` index is sorted already
(df1.conditional_join(
        temp,
        # left column, right column, join operator
        ('Item', 'L', '>='),
        ('Item', 'H', '<='),
        how='left')
    .loc[:, ['Item', 'Value', 'Name']]
)
Item Value Name
0 1 23 A
1 2 3 A
2 3 45 A
3 4 65 B
4 5 17 A
5 6 6 A
6 7 18 B
7 500 78 NaN
8 501 98 NaN

Python Pandas calculate total volume with last article volume

I have the following problem and do not know how to solve it in a performant way:
Input Pandas DataFrame:
timestep
article
volume
35
1
20
37
2
5
123
2
12
155
3
10
178
2
23
234
1
17
478
1
28
Output Pandas DataFrame:
timestep
volume
35
20
37
25
123
32
178
53
234
50
478
61
Calculation Example for timestep 478:
28 (last article 1 volume) + 23 (last article 2 volume) + 10 (last article 3 volume) = 61
What is the best way to do this in pandas?
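For reference, a minimal reconstruction of the input frame from the table above (note that the printed outputs in the answers below appear to have been produced without the article-3 row at timestep 155):
import pandas as pd
df = pd.DataFrame({"timestep": [35, 37, 123, 155, 178, 234, 478],
                   "article": [1, 2, 2, 3, 2, 1, 1],
                   "volume": [20, 5, 12, 10, 23, 17, 28]})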
Try with ffill:
#sort if needed
df = df.sort_values("timestep")
df["volume"] = (df["volume"].where(df["article"].eq(1)).ffill().fillna(0) +
df["volume"].where(df["article"].eq(2)).ffill().fillna(0))
output = df.drop("article", axis=1)
>>> output
timestep volume
0 35 20.0
1 37 25.0
2 123 32.0
3 178 43.0
4 234 40.0
5 478 51.0
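As written this covers only articles 1 and 2, which is why the totals from timestep 178 on fall short of the expected output. A generalised sketch (my extension, not part of the original answer) that sums the last-seen volume of every article and reproduces the expected 53/50/61:
df = df.sort_values("timestep")
# for each article, forward-fill its last-seen volume, then add them all up
df["volume"] = sum(df["volume"].where(df["article"].eq(a)).ffill().fillna(0)
                   for a in df["article"].unique())
output = df.drop("article", axis=1)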
Group by article, take the last element of each group, and sum:
df.groupby(['article']).tail(1)["volume"].sum()
Note that this yields only the final total (61 for the sample data), not the running per-timestep column the question asks for.
You can assign a group number to each run of consecutive article values with .cumsum(). Then get the previous group's last value with .map() and GroupBy.last(). Finally, add volume to this previous last value, as follows:
# Get group number of consecutive `article`
g = df['article'].ne(df['article'].shift()).cumsum()
# Add `volume` to previous group last
df['volume'] += g.sub(1).map(df.groupby(g)['volume'].last()).fillna(0, downcast='infer')
Result:
print(df)
timestep article volume
0 35 1 20
1 37 2 25
2 123 2 32
3 178 2 43
4 234 1 40
5 478 1 51
Breakdown of steps
Previous group last values:
g.sub(1).map(df.groupby(g)['volume'].last()).fillna(0, downcast='infer')
0 0
1 20
2 20
3 20
4 43
5 43
Name: article, dtype: int64
Try:
df["new_volume"] = (
df.loc[df["article"] != df["article"].shift(-1), "volume"]
.reindex(df.index, method='ffill')
.shift()
+ df["volume"]
).fillna(df["volume"])
df
Output:
timestep article volume new_volume
0 35 1 20 20.0
1 37 2 5 25.0
2 123 2 12 32.0
3 178 2 23 43.0
4 234 1 17 40.0
5 478 1 28 51.0
Explained:
Find the last record of each group by comparing 'article' with the next row, then reindex that series to align with the original dataframe, forward-fill, and shift it down so each row sees the previous group's last 'volume'. Add this to the current row's 'volume', and fill the first value with the original 'volume' value.

Average of every x rows with a step size of y per each subset using pandas

I have a pandas data frame like this:
Subset Position Value
1 1 2
1 10 3
1 15 0.285714
1 43 1
1 48 0
1 89 2
1 132 2
1 152 0.285714
1 189 0.133333
1 200 0
2 1 0.133333
2 10 0
2 15 2
2 33 2
2 36 0.285714
2 72 2
2 132 0.133333
2 152 0.133333
2 220 3
2 250 8
2 350 6
2 750 0
How can I get the mean of the values for every x rows with a step size of y per subset in pandas?
For example, the mean of every 5 rows (step size = 2) for the Value column in each subset, like this:
Subset Start_position End_position Mean
1 1 48 1.2571428
1 15 132 1.0571428
1 48 189 0.8838094
2 1 36 0.8838094
2 15 132 1.2838094
2 36 220 1.110476
2 132 350 3.4533332
Is this what you were looking for:
df = pd.DataFrame({'Subset': [1]*10 + [2]*12,
                   'Position': [1,10,15,43,48,89,132,152,189,200,1,10,15,33,36,72,132,152,220,250,350,750],
                   'Value': [2,3,.285714,1,0,2,2,.285714,.133333,0,.133333,0,2,2,.285714,2,.133333,.133333,3,8,6,0]})
averaged_df = pd.DataFrame(columns=['Subset', 'Start_position', 'End_position', 'Mean'])
window = 5
step_size = 2
for subset in df.Subset.unique():
    subset_df = df[df.Subset == subset].reset_index(drop=True)
    for i in range(0, len(subset_df), step_size):  # iterate within the subset
        window_rows = subset_df.iloc[i:i + window]
        if len(window_rows) < window:
            continue
        window_average = {'Subset': window_rows.Subset.iloc[0],
                          'Start_position': window_rows.Position.iloc[0],
                          'End_position': window_rows.Position.iloc[-1],
                          'Mean': window_rows.Value.mean()}
        # note: DataFrame.append was removed in pandas 2.0; collect the dicts
        # in a list and build the frame with pd.DataFrame there instead
        averaged_df = averaged_df.append(window_average, ignore_index=True)
Some notes about the code:
It assumes all subsets are in order in the original df (1,1,2,1,2,2 will behave as if it was 1,1,1,2,2,2)
If the rows left in a subset are fewer than a window, it skips them (e.g. the window 1, 132, 200, 0.60476 is not included)
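For larger frames, here is a loop-free sketch of the same computation using numpy.lib.stride_tricks.sliding_window_view (my addition; requires numpy >= 1.20 and assumes each subset is already sorted by Position):
import numpy as np
import pandas as pd

def window_means(g, window=5, step=2):
    pos = g['Position'].to_numpy()
    vals = g['Value'].to_numpy()
    if len(vals) < window:  # subset smaller than one window: nothing to emit
        return pd.DataFrame(columns=['Start_position', 'End_position', 'Mean'])
    # all full windows, then keep every `step`-th one
    w = np.lib.stride_tricks.sliding_window_view(vals, window)[::step]
    return pd.DataFrame({'Start_position': pos[:len(vals) - window + 1:step],
                         'End_position': pos[window - 1::step],
                         'Mean': w.mean(axis=1)})

out = (df.groupby('Subset')
         .apply(window_means)
         .reset_index(level=0)
         .reset_index(drop=True))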
A version-specific answer, using pandas.api.indexers.FixedForwardWindowIndexer introduced in pandas 1.1.0:
>>> window=5
>>> step=2
>>> indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=window)
>>> df2 = df.join(df.Position.shift(-(window-1)), lsuffix='_start', rsuffix='_end')
>>> df2 = df2.assign(Mean=df2.pop('Value').rolling(window=indexer).mean()).iloc[::step]
>>> df2 = df2[df2.Position_start.lt(df2.Position_end)].dropna()
>>> df2['Position_end'] = df2['Position_end'].astype(int)
>>> df2
Subset Position_start Position_end Mean
0 1 1 48 1.257143
2 1 15 132 1.057143
4 1 48 189 0.883809
10 2 1 36 0.883809
12 2 15 132 1.283809
14 2 36 220 1.110476
16 2 132 350 3.453333

Sumifs excel formula in Pandas

I have seen a lot of SUMIFS questions answered here, but they are very different from the one I need.
1st Trade data frame contains transaction id and C_ID
transaction C_ID
1 101
2 103
3 104
4 101
5 102
6 104
2nd Customer data frame contains C_ID, On/Off, Amount
C_ID On/Off Amount
102 On 320
101 On 400
101 On 200
103 On 60
104 Off 80
104 On 100
So I want to calculate the Amount based on the C_ID, with a condition on the 'On/Off' column in the Customer data frame. The resulting trade data frame should be:
transaction C_ID Amount
1 101 600
2 103 60
3 104 100
4 101 600
5 102 320
6 104 100
Here's the Excel formula for how Amount is calculated:
=SUMIFS(Customer.Amount, Customer.C_ID = Trade.C_ID, Customer.On/Off = On)
I want to replicate this particular formula in Python using pandas.
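For reference, a minimal reconstruction of the two frames (values from the question):
import pandas as pd
df1 = pd.DataFrame({"transaction": [1, 2, 3, 4, 5, 6],
                    "C_ID": [101, 103, 104, 101, 102, 104]})
df2 = pd.DataFrame({"C_ID": [102, 101, 101, 103, 104, 104],
                    "On/Off": ["On", "On", "On", "On", "Off", "On"],
                    "Amount": [320, 400, 200, 60, 80, 100]})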
You can use groupby() on the filtered data to compute the sum, then map to assign the new column to the transaction data:
s = df2[df2['On/Off']=='On'].groupby('C_ID')['Amount'].sum()
df1['Amount'] = df1['C_ID'].map(s)
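If a C_ID has no 'On' rows at all, map leaves NaN for it; chain .fillna(0) for SUMIFS-style zeros (a small addition):
df1['Amount'] = df1['C_ID'].map(s).fillna(0)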
We do filter + groupby + reindex + assign:
df1['Amount']=df2.loc[df2['On/Off']=='On'].groupby(['C_ID']).Amount.sum().reindex(df1.C_ID).tolist()
df1
Out[340]:
transaction C_ID Amount
0 1 101 600
1 2 103 60
2 3 104 100
3 4 101 600
4 5 102 320
5 6 104 100

Pivot table operations on pandas dataframe

I have the following dataframe in pandas:
df
DAY YEAR REGION VALUE
1 2000 A 12
2 2000 A 10
3 2000 A 13
6 2000 A 15
1 2001 A 3
2 2001 A 40
3 2001 A 83
4 2001 A 95
1 2000 B 124
3 2000 B 102
5 2000 B 131
8 2000 B 150
1 2001 B 30
5 2001 B 4
8 2001 B 8
9 2001 B 12
I would like to create a new data frame such that each row contains a distinct combination of YEAR and REGION. It also contains a column which sums up the VALUE for that YEAR, REGION combination and another column which provides the maximum VALUE for the YEAR, REGION combination. The result should look like:
YEAR REGION SUM_VALUE MAX_VALUE
2000 A 50 15
2001 A 221 95
2000 B 507 150
2001 B 54 30
Here is what I am doing:
new_df = pandas.DataFrame()
for yr in df.YEAR.unique():
    for reg in df.REGION.unique():
        new_df = new_df.append({'YEAR': yr}, ignore_index=True)
        new_df = new_df.append({'REGION': reg}, ignore_index=True)
However, this creates a new row each time and is not very pythonic due to the extra for loops. Is there a better way to proceed?
Please note that this is a toy dataframe, the actual dataframe has several VALUE columns. The proposed solution should scale, without having to manually specify the names of the VALUE columns.
groupby on 'YEAR' and 'REGION' and pass a list of functions to call using agg:
In [9]:
df.groupby(['YEAR','REGION'])['VALUE'].agg(['sum','max']).reset_index()
Out[9]:
YEAR REGION sum max
0 2000 A 50 15
1 2000 B 507 150
2 2001 A 221 95
3 2001 B 54 30
EDIT:
If you want to name the aggregated columns, pass a dict:
In [18]:
df.groupby(['YEAR','REGION'])['VALUE'].agg({'sum_VALUE':'sum','max_VALUE':'max'}).reset_index()
Out[18]:
YEAR REGION max_VALUE sum_VALUE
0 2000 A 15 50
1 2000 B 150 507
2 2001 A 95 221
3 2001 B 30 54
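Note: in recent pandas versions, passing a renaming dict to a SeriesGroupBy's agg (as in the EDIT above) was deprecated and then removed; named aggregation is the current equivalent. A minimal sketch:
df.groupby(['YEAR','REGION'])['VALUE'].agg(sum_VALUE='sum', max_VALUE='max').reset_index()
With several VALUE columns, drop the non-value columns (here DAY) and aggregate all of them at once without naming each: df.drop(columns='DAY').groupby(['YEAR','REGION']).agg(['sum','max'])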
