If-else conditions to create a new column in a dataframe in Python

I have a data frame like this.
   FOOD_ID  Cumulative_addition
0      110                    0
1      110                   15
2      110                   15
3      110                   35
4      111                    0
5      111                   10
6      111                   10
I want to add another column that gives the addition only for a particular FOOD_ID. The final dataframe that I want looks like this:
   FOOD_ID  Cumulative_addition  Addition_Only
0      110                    0              0
1      110                   15             15
2      110                   15              0
3      110                   35             20
4      111                    0              0
5      111                   10             10
6      111                   10              0
I know how to do this in Excel using an IF statement, but I do not know how to do it in Python.

Try:
df['Addition_only'] = (df.groupby('FOOD_ID').Cumulative_addition.shift(-1) - df.Cumulative_addition).shift(1).fillna(0)
Detail
df.groupby('FOOD_ID').Cumulative_addition.shift(-1)
This shifts the Cumulative_addition column up by one row within each FOOD_ID group.
Then you can subtract the original column to get the difference, shift the result back down by one row to align it, and fill the remaining NaNs with 0.
Hope that helps.
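For what it's worth, a shorter route (my sketch, not part of the original answer) is a grouped diff: the first difference of a cumulative column within each FOOD_ID group recovers the additions directly.
import pandas as pd

df = pd.DataFrame({'FOOD_ID': [110, 110, 110, 110, 111, 111, 111],
                   'Cumulative_addition': [0, 15, 15, 35, 0, 10, 10]})

# diff() within each group gives the row-to-row increase directly;
# the first row of each group is NaN, which fillna(0) maps to the expected 0
df['Addition_Only'] = df.groupby('FOOD_ID')['Cumulative_addition'].diff().fillna(0)
print(df)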

Related

Map two pandas dataframes and add a column to the first dataframe

I have posted two sample dataframes. I would like to map one column of a dataframe with respect to the index of a column in another dataframe, and place the values back into the first dataframe, as shown below:
A = np.array([0,1,1,3,5,2,5,4,2,0])
B = np.array([55,75,86,98,100,111])
df1 = pd.Series(A, name='data').to_frame()
df2 = pd.Series(B, name='values_for_replacement').to_frame()
Below is the first dataframe, df1:
   data
0     0
1     1
2     1
3     3
4     5
5     2
6     5
7     4
8     2
9     0
And below is the second dataframe, df2:
   values_for_replacement
0                      55
1                      75
2                      86
3                      98
4                     100
5                     111
Below is the output needed (mapped with respect to the index of df2):
   data  new_data
0     0        55
1     1        75
2     1        75
3     3        98
4     5       111
5     2        86
6     5       111
7     4       100
8     2        86
9     0        55
I would like to know how one can achieve this using a pandas function like map.
Looking forward to some answers. Many thanks in advance.
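One way the lookup described above can be written (a sketch, not an answer from the original thread): Series.map accepts another Series and looks each value up in that Series' index, which is exactly the index-based replacement wanted here.
import numpy as np
import pandas as pd

A = np.array([0, 1, 1, 3, 5, 2, 5, 4, 2, 0])
B = np.array([55, 75, 86, 98, 100, 111])
df1 = pd.Series(A, name='data').to_frame()
df2 = pd.Series(B, name='values_for_replacement').to_frame()

# each value in 'data' is treated as a label into df2's index
df1['new_data'] = df1['data'].map(df2['values_for_replacement'])
print(df1)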

how to groupby and aggregate dynamic columns in pandas

I have the following dataframe in pandas:
code  tank  nozzle_1  nozzle_2  nozzle_var  nozzle_sale
 123     1         1         1          10           10
 123     1         2         2          12           10
 123     2         1         1          10           10
 123     2         2         2          12           10
 123     1         1         1          10           10
 123     2         2         2          12           10
Now, I want to generate a cumulative sum of all the columns, grouping over tank, and take out the last observation. The nozzle_1 and nozzle_2 columns are dynamic; there could be nozzle_3, nozzle_4, ..., nozzle_n, etc. I am doing the following in pandas to get the cumsum:
## Below code for calculating cumsum of the dynamic columns nozzle_1 and nozzle_2
cols = df.columns[df.columns.str.contains(pat=r'nozzle_\d+$', regex=True)]
df = df.assign(**df.groupby('tank')[cols].agg(['cumsum'])
                  .pipe(lambda x: x.set_axis(x.columns.map('_'.join),
                                             axis=1, inplace=False)))
## nozzle_sale_cumsum is a static column
df['nozzle_sale_cumsum'] = df.groupby('tank')['nozzle_sale'].cumsum()
From the above code I get the cumsum of the following columns:
tank  nozzle_1  nozzle_2  nozzle_var  nozzle_1_cumsum  nozzle_2_cumsum  nozzle_sale_cumsum
   1         1         1          10                1                1                  10
   1         2         2          12                3                3                  20
   2         1         1          10                1                1                  10
   2         2         2          12                3                3                  20
   1         1         1          10                4                4                  30
   2         2         2          12                5                5                  30
Now, I want to get the last values of all three cumsum columns, grouping over tank. I can do it with the following code in pandas, but it is hard-coded with column names:
final_df = df.groupby('tank').agg({'nozzle_1_cumsum': 'last',
                                   'nozzle_2_cumsum': 'last',
                                   'nozzle_sale_cumsum': 'last',
                                   }).reset_index()
The problem with the above code is that nozzle_1_cumsum and nozzle_2_cumsum are hard-coded, which will not always match the data. How can I do this in pandas with dynamic columns?
How about:
df.filter(regex='_cumsum').groupby(df['tank']).last()
Output:
      nozzle_1_cumsum  nozzle_2_cumsum  nozzle_sale_cumsum
tank
1                   4                4                  30
2                   5                5                  30
You can also replace df.filter(...) by, e.g., df.iloc[:,-3:] or df[col_names].
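If you want the same flat layout as final_df in the question, resetting the index afterwards restores tank as a regular column (a small usage sketch of the answer above):
final_df = (df.filter(regex='_cumsum')
              .groupby(df['tank'])
              .last()
              .reset_index())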

pandas modifying sections then recombining

I have been working on modifying an Excel document with pandas. I only need to work with small sections at a time, and breaking each into a separate DataFrame, modifying it, and then recombining it back into the whole seems like the best solution. Is this feasible? I've tried a couple of options with merge() and concat(), but they don't give me the results I am looking for.
As previously stated, I've tried using the merge() function to recombine them. With the larger table I just got a MemoryError, and when I tested it with smaller dataframes, rows weren't maintained.
Here's a small-scale example of what I am looking to do:
import pandas as pd

df1 = pd.DataFrame({'A': [1, 2, 3, 5, 6],
                    'B': [3, 10, 11, 13, 324],
                    'C': [64, '', '', '', ''],
                    'D': [32, 45, 67, 80, 100]})  # example df
print(df1)
df2 = df1[['B', 'C']]  # section taken
df2.at[2, 'B'] = 1  # modify area
print(df2)
df1 = df1.merge(df2)  # merge dataframes
print(df1)
output:
   A    B   C    D
0  1    3  64   32
1  2   10       45
2  3   11       67
3  5   13       80
4  6  324      100
     B   C
0    3  64
1   10
2    1
3   13
4  324
   A    B   C    D
0  1    3  64   32
1  2   10       45
2  5   13       80
3  6  324      100
What I would like to see:
   A    B   C    D
0  1    3  64   32
1  2   10       45
2  3   11       67
3  5   13       80
4  6  324      100
     B   C
0    3  64
1   10
2    1
3   13
4  324
   A    B   C    D
0  1    3  64   32
1  2   10       45
2  3    1       67
3  5   13       80
4  6  324      100
As I said before, in my actual code I just get a MemoryError if I try this, due to the size of the dataframe.
No need for merging here; you can just re-assign the values from df2 back into df1:
...
df1.loc[df2.index, df2.columns] = df2  # recover changes into original dataframe
print(df1)
giving as expected:
   A    B   C    D
0  1    3  64   32
1  2   10       45
2  3    1       67
3  5   13       80
4  6  324      100
df1.update(df2) gives the same result (thanks to Quang Hoang for pointing that out).
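For completeness, a minimal illustration of the update route, rebuilt from the example data above (note that DataFrame.update skips NaN values in the incoming frame, which does not matter here since df2 contains none):
df1 = pd.DataFrame({'A': [1, 2, 3, 5, 6], 'B': [3, 10, 11, 13, 324],
                    'C': [64, '', '', '', ''], 'D': [32, 45, 67, 80, 100]})
df2 = df1[['B', 'C']].copy()
df2.at[2, 'B'] = 1

df1.update(df2)  # aligns on index and columns, overwrites df1 in place
print(df1)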

Drop rows after maximum value in a grouped Pandas dataframe

I've got a date-ordered dataframe that can be grouped. What I am attempting to do is group by a variable (Person), determine the maximum (Weight) for each group (Person), and then drop all rows that come after (by Date) the maximum.
Here's an example of the data:
df = pd.DataFrame({'Person': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
                   'Date': ['1/1/2015', '2/1/2015', '3/1/2015', '4/1/2015', '5/1/2015',
                            '6/1/2011', '7/1/2011', '8/1/2011', '9/1/2011', '10/1/2011'],
                   'MonthNo': [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
                   'Weight': [100, 110, 115, 112, 108, 205, 210, 211, 215, 206]})
        Date  MonthNo  Person  Weight
0   1/1/2015        1       1     100
1   2/1/2015        2       1     110
2   3/1/2015        3       1     115
3   4/1/2015        4       1     112
4   5/1/2015        5       1     108
5   6/1/2011        1       2     205
6   7/1/2011        2       2     210
7   8/1/2011        3       2     211
8   9/1/2011        4       2     215
9  10/1/2011        5       2     206
Here's what I want the result to look like:
       Date  MonthNo  Person  Weight
0  1/1/2015        1       1     100
1  2/1/2015        2       1     110
2  3/1/2015        3       1     115
5  6/1/2011        1       2     205
6  7/1/2011        2       2     210
7  8/1/2011        3       2     211
8  9/1/2011        4       2     215
I think it's worth noting that there can be disjoint start dates and the maximum may appear at different times.
My idea was to find the maximum for each group, obtain the MonthNo the maximum was in for that group, and then discard any rows with a MonthNo greater than the max-weight MonthNo. So far I've been able to obtain the max by group, but cannot get past doing a comparison based on that.
Please let me know if I can edit/provide more information; I haven't posted many questions here! Thanks for the help, and sorry if my formatting/question isn't clear.
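For reference, the MonthNo-based plan described in the question can be written out directly (a sketch of that idea, not one of the posted answers; it assumes MonthNo increases with Date within each person):
# index label of each person's maximum Weight, then the MonthNo at that row
peak_idx = df.groupby('Person')['Weight'].idxmax()
cutoff = df.loc[peak_idx].set_index('Person')['MonthNo']

# keep rows at or before each person's peak month
result = df[df['MonthNo'] <= df['Person'].map(cutoff)]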
Using idxmax with groupby:
df.groupby('Person', sort=False).apply(
    lambda x: x.reset_index(drop=True)
               .iloc[:x.reset_index(drop=True).Weight.idxmax() + 1, :]
)
Out[131]:
              Date  MonthNo  Person  Weight
Person
1      0  1/1/2015        1       1     100
       1  2/1/2015        2       1     110
       2  3/1/2015        3       1     115
2      0  6/1/2011        1       2     205
       1  7/1/2011        2       2     210
       2  8/1/2011        3       2     211
       3  9/1/2011        4       2     215
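The apply above leaves a (Person, position) MultiIndex on the result; if you prefer a flat index, a trailing reset_index(drop=True) removes it (a minor usage note, not from the original answer):
result = df.groupby('Person', sort=False).apply(
    lambda x: x.reset_index(drop=True)
               .iloc[:x.reset_index(drop=True).Weight.idxmax() + 1]
).reset_index(drop=True)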
You can use groupby.transform with idxmax. The first two steps may not be necessary, depending on how your dataframe is structured.
# convert Date to datetime
df['Date'] = pd.to_datetime(df['Date'])
# sort by Person and Date to make index usable for next step
df = df.sort_values(['Person', 'Date']).reset_index(drop=True)
# filter for index less than idxmax transformed by group
df = df[df.index <= df.groupby('Person')['Weight'].transform('idxmax')]
print(df)
        Date  MonthNo  Person  Weight
0 2015-01-01        1       1     100
1 2015-02-01        2       1     110
2 2015-03-01        3       1     115
5 2011-06-01        1       2     205
6 2011-07-01        2       2     210
7 2011-08-01        3       2     211
8 2011-09-01        4       2     215

Create a new column based on the state of another column in pandas

Suppose I want to create a new column that counts the number of days since the state was 1. As an example, the current columns would be the first three below; the fourth column is what I'm trying to get.
Index  State  Days  Since_Days
    1      1     0           0
    2      0    20          20
    3      0    40          40
    4      1    55          55
    5      1    60           5
    6      1    70          10
Without resorting to for-loop, what is a pandas way to approach this?
You can also try the following: first group by State, and for rows with State == 1, fill Since_Days with the grouped difference of Days. Rows with State == 0 will then be NaN, which can be filled with the corresponding Days value:
df.loc[df.State == 1, 'Since_Days'] = df.groupby('State')['Days'].diff().fillna(0)
df['Since_Days'].fillna(df['Days'],inplace=True)
print(df)
Result:
   Index  State  Days  Since_Days
0      1      1     0         0.0
1      2      0    20        20.0
2      3      0    40        40.0
3      4      1    55        55.0
4      5      1    60         5.0
5      6      1    70        10.0
The values to be subtracted can be formed with:
import numpy as np

ser = df['Days'].where(df['State'] == 1, np.nan).ffill().shift()
If you subtract this from the original Days column, you'll have:
df['Days'].sub(ser, fill_value=0).astype('int')
Out:
0     0
1    20
2    40
3    55
4     5
5    10
Name: Days, dtype: int64
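Put together, the whole computation from this answer reads as follows (a consolidated sketch, rebuilding the sample frame from the question):
import numpy as np
import pandas as pd

df = pd.DataFrame({'State': [1, 0, 0, 1, 1, 1],
                   'Days': [0, 20, 40, 55, 60, 70]})

# carry forward the last Days value seen while State == 1, shifted down one row,
# then subtract it from Days; fill_value=0 covers the leading NaN
ser = df['Days'].where(df['State'] == 1, np.nan).ffill().shift()
df['Since_Days'] = df['Days'].sub(ser, fill_value=0).astype('int')
print(df)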
