Creating a difference of DataFrame columns from the mean in Python

I am new to Python and trying to replicate in Python things that are done in Excel.
I want to take the difference of each column (A, B, C) from the fixed mean column:
   Unnamed: 0  A   B   C  mean  Mean diffA
0  2020-08-28  1   6  11   6.0        -5.0
1  2020-08-29  2   7  12   7.0        -5.0
2  2020-08-30  3   8  13   8.0        -5.0
3  2020-08-31  4   9  14   9.0        -5.0
4  2020-09-01  5  10  15  10.0        -5.0
One way is to manually put in each column name and find the difference, but is there a less manual way?
new_df['Mean diffA']=new_df['A']-new_df['mean']

You can subtract the mean from a range of columns:
diffs = new_df.loc[:, 'A':'C'].subtract(new_df['mean'], axis=0)
Then combine the differences and the original DataFrame:
new_df.join(diffs, rsuffix='_mean')
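A minimal end-to-end sketch of this approach, using made-up data shaped like the question's frame:

```python
import pandas as pd

new_df = pd.DataFrame({'A': [1, 2, 3], 'B': [6, 7, 8], 'C': [11, 12, 13],
                       'mean': [6.0, 7.0, 8.0]})

# subtract the mean column from every column in the A:C range at once
diffs = new_df.loc[:, 'A':'C'].subtract(new_df['mean'], axis=0)

# join back; colliding column names get the '_mean' suffix
result = new_df.join(diffs, rsuffix='_mean')
print(result)
```

`axis=0` aligns the subtraction row-wise, so each row's mean is subtracted from that row's A, B, and C values.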

You have a few options to do this. I tried the following and it worked.
import pandas as pd
dt = {'START DATE':['2020-08-28','2020-08-29','2020-08-30',
'2020-08-31','2020-09-01'],
'A':[1,2,3,4,5],
'B':[6,7,8,9,10],
'C':[11,12,13,14,15]}
df = pd.DataFrame(dt)
df['Mean'] = df.loc[:,'A':'C'].mean(axis=1)
df[['dA','dB','dC']] = df.loc[:, 'A':'C'].subtract(df['Mean'], axis=0)
print(df)
Or you can do something like this as well:
df[['dA','dB','dC']] = df.loc[:,'A':'C'] - df[['Mean','Mean','Mean']].values
print(df)
Both of these will provide the same output:
START DATE A B C Mean dA dB dC
0 2020-08-28 1 6 11 6.0 -5.0 0.0 5.0
1 2020-08-29 2 7 12 7.0 -5.0 0.0 5.0
2 2020-08-30 3 8 13 8.0 -5.0 0.0 5.0
3 2020-08-31 4 9 14 9.0 -5.0 0.0 5.0
4 2020-09-01 5 10 15 10.0 -5.0 0.0 5.0
That said, the second option is not a good approach; pandas provides the subtract method, so prefer the first one.

Count of daily activities per machine based on a lambda conditional in pandas

I have the following dataset:
import numpy as np
import pandas as pd

my_df = pd.DataFrame({'id':[1,2,3,4,5,6,7,8,9],
'machine':['A','A','A','B','B','A','B','B','A'],
'prod':['button','tack','pin','button','tack','pin','clip','clip','button'],
'qty':[100,50,30,70,60,15,200,180,np.nan],
'hours':[4,3,1,3,2,0.5,5,6,np.nan],
'day':[1,1,1,1,1,1,2,2,2]})
my_df['prod_rate']=my_df['qty']/my_df['hours']
my_df
id machine prod qty hours day prod_rate
0 1 A button 100.0 4.0 1 25.000000
1 2 A tack 50.0 3.0 1 16.666667
2 3 A pin 30.0 1.0 1 30.000000
3 4 B button 70.0 3.0 1 23.333333
4 5 B tack 60.0 2.0 1 30.000000
5 6 A pin 15.0 0.5 1 30.000000
6 7 B clip 200.0 5.0 2 40.000000
7 8 B clip 180.0 6.0 2 30.000000
8 9 A button NaN NaN 2 NaN
And I want to count the daily activities, except when there is a NaN (which means that the machine was down due to a failure).
I tried this code:
my_df['activities']=my_df.groupby(['day','machine'])['machine']\
.transform(lambda x: x['machine'].count() if x['qty'].notna() else np.nan)
But it returns an error: KeyError: 'qty'
This is the expected result:
id machine prod qty hours day prod_rate activities
0 1 A button 100.0 4.0 1 25.000000 4
1 2 A tack 50.0 3.0 1 16.666667 4
2 3 A pin 30.0 1.0 1 30.000000 4
3 4 B button 70.0 3.0 1 23.333333 2
4 5 B tack 60.0 2.0 1 30.000000 2
5 6 A pin 15.0 0.5 1 30.000000 4
6 7 B clip 200.0 5.0 2 40.000000 2
7 8 B clip 180.0 6.0 2 30.000000 2
8 9 A button NaN NaN 2 NaN NaN
Please, could you help me fix my lambda expression? It will help me for this question and for other operations too.
Although I prefer Steele Farnsworth's solution, here is what the OP requested. For the lambda to work:
my_df['activities'] = my_df.groupby(['day','machine'])['qty']\
.transform(lambda x: x.count() if x.notna().all() else np.nan)
print(my_df)
This prints:
id machine prod qty hours day prod_rate activities
0 1 A button 100.0 4.0 1 25.000000 4.0
1 2 A tack 50.0 3.0 1 16.666667 4.0
2 3 A pin 30.0 1.0 1 30.000000 4.0
3 4 B button 70.0 3.0 1 23.333333 2.0
4 5 B tack 60.0 2.0 1 30.000000 2.0
5 6 A pin 15.0 0.5 1 30.000000 4.0
6 7 B clip 200.0 5.0 2 40.000000 2.0
7 8 B clip 180.0 6.0 2 30.000000 2.0
8 9 A button NaN NaN 2 NaN NaN
You can do the calculation as normal, and then fill in the NaNs where they are wanted afterwards.
>>> my_df['activities'] = my_df.groupby(['day', 'machine'])['machine'].transform('count')
>>> my_df.loc[my_df['qty'].isna(), 'activities'] = np.NaN
>>> my_df
id machine prod qty hours day prod_rate activities
0 1 A button 100.0 4.0 1 25.000000 4.0
1 2 A tack 50.0 3.0 1 16.666667 4.0
2 3 A pin 30.0 1.0 1 30.000000 4.0
3 4 B button 70.0 3.0 1 23.333333 2.0
4 5 B tack 60.0 2.0 1 30.000000 2.0
5 6 A pin 15.0 0.5 1 30.000000 4.0
6 7 B clip 200.0 5.0 2 40.000000 2.0
7 8 B clip 180.0 6.0 2 30.000000 2.0
8 9 A button NaN NaN 2 NaN NaN
You should avoid using lambdas as much as possible in the context of Pandas, as they are not vectorized (and will therefore run slower) and are less communicative than using existing, idiomatic Pandas methods.
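The masking step can also be written with Series.where, which keeps the count only where qty is present. A sketch on a trimmed version of the question's data:

```python
import numpy as np
import pandas as pd

my_df = pd.DataFrame({'id': [1, 2, 3, 4, 5, 6, 7, 8, 9],
                      'machine': ['A','A','A','B','B','A','B','B','A'],
                      'qty': [100, 50, 30, 70, 60, 15, 200, 180, np.nan],
                      'day': [1, 1, 1, 1, 1, 1, 2, 2, 2]})

# count rows per (day, machine), then blank out rows where qty is missing
counts = my_df.groupby(['day', 'machine'])['machine'].transform('count')
my_df['activities'] = counts.where(my_df['qty'].notna())
```

Both steps are vectorized, so no Python-level lambda runs per group.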

How to calculate totals in a dataframe with pandas

I have got this dataframe:
Date Trader1 Trader2 Trader3
01/04/2020 4 6 8
02/04/2020 4 6 8
03/04/2020 4 7 8
04/04/2020 4 7 8
05/04/2020 3 5 7
06/04/2020 2 4 7
07/04/2020 2 3 6
08/04/2020 3 3 6
09/04/2020 3 5 7
10/04/2020 3 5 7
11/04/2020 3 5 6
I would like to get totals for each column using the pandas library. When I apply a.loc['Total'] = pd.Series(a.sum()), I get totals for each column, but it also adds together the values of the Date column (dates). How can I calculate totals only for the needed columns?
You can select only numeric columns by DataFrame.select_dtypes:
a.loc['Total'] = a.select_dtypes(np.number).sum()
You can remove column Date by DataFrame.drop:
a.loc['Total'] = a.drop('Date', axis=1).sum()
Or select all columns without first by positions by DataFrame.iloc:
a.loc['Total'] = a.iloc[:, 1:].sum()
print (a)
Date Trader1 Trader2 Trader3
0 01/04/2020 4.0 6.0 8.0
1 02/04/2020 4.0 6.0 8.0
2 03/04/2020 4.0 7.0 8.0
3 04/04/2020 4.0 7.0 8.0
4 05/04/2020 3.0 5.0 7.0
5 06/04/2020 2.0 4.0 7.0
6 07/04/2020 2.0 3.0 6.0
7 08/04/2020 3.0 3.0 6.0
8 09/04/2020 3.0 5.0 7.0
9 10/04/2020 3.0 5.0 7.0
10 11/04/2020 3.0 5.0 6.0
Total NaN 35.0 56.0 78.0
data[['Trader1','Trader2','Trader3']].sum()
I just saw your comment. There may be better ways, but I think this should work:
data[data.columns[1:]].sum()
You have to provide the column range in the last line.
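One more option, not mentioned in the answers above: DataFrame.sum accepts numeric_only=True, which skips non-numeric columns such as Date automatically. A sketch with a shortened version of the data:

```python
import pandas as pd

a = pd.DataFrame({'Date': ['01/04/2020', '02/04/2020', '03/04/2020'],
                  'Trader1': [4, 4, 4],
                  'Trader2': [6, 6, 7],
                  'Trader3': [8, 8, 8]})

# sum only the numeric columns; Date is left as NaN in the Total row
a.loc['Total'] = a.sum(numeric_only=True)
print(a)
```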

How to interpolate in Pandas using only previous values?

This is my dataframe:
df = pd.DataFrame(np.array([ [1,5],[1,6],[1,np.nan],[2,np.nan],[2,8],[2,4],[2,np.nan],[2,10],[3,np.nan]]),columns=['id','value'])
id value
0 1 5
1 1 6
2 1 NaN
3 2 NaN
4 2 8
5 2 4
6 2 NaN
7 2 10
8 3 NaN
This is my expected output:
id value
0 1 5
1 1 6
2 1 7
3 2 NaN
4 2 8
5 2 4
6 2 2
7 2 10
8 3 NaN
This is my current output using this code:
df.value.interpolate(method="krogh")
0 5.000000
1 6.000000
2 9.071429
3 10.171429
4 8.000000
5 4.000000
6 2.357143
7 10.000000
8 36.600000
Basically, I want to do two important things here:
Group by id, then interpolate using only the values above each row, not the values below it.
This should do the trick:
df["value_interp"] = df.value.combine_first(
    df.groupby("id")["value"].apply(
        lambda y: y.expanding().apply(
            lambda x: x.interpolate(method="krogh").to_numpy()[-1], raw=False)))
Outputs:
id value value_interp
0 1.0 5.0 5.0
1 1.0 6.0 6.0
2 1.0 NaN 7.0
3 2.0 NaN NaN
4 2.0 8.0 8.0
5 2.0 4.0 4.0
6 2.0 NaN 0.0
7 2.0 10.0 10.0
8 3.0 NaN NaN
(It interpolates based only on the previous values within the group - hence index 6 will return 0 not 2)
You can group by id and then loop over the groups to interpolate. For id = 2, interpolation will not give you the value 2:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.array([ [1,5],[1,6],[1,np.nan],[2,np.nan],[2,8],[2,4],[2,np.nan],[2,10],[3,np.nan]]),columns=['id','value'])
data = []
for name, group in df.groupby('id'):
group_interpolation = group.interpolate(method='krogh', limit_direction='forward', axis=0)
data.append(group_interpolation)
df = (pd.concat(data)).round(1)
Output:
id value
0 1.0 5.0
1 1.0 6.0
2 1.0 7.0
3 2.0 NaN
4 2.0 8.0
5 2.0 4.0
6 2.0 4.7
7 2.0 10.0
8 3.0 NaN
The current pandas.Series.interpolate does not support what you want, so to achieve your goal you need two groupbys that account for your desire to use only previous rows. The idea is to combine each missing value into one group with the previous rows (this may have limitations if you have several missing values in a row, but it works for your toy example).
Suppose we have a df:
print(df)
ID Value
0 1 5.0
1 1 6.0
2 1 NaN
3 2 NaN
4 2 8.0
5 2 4.0
6 2 NaN
7 2 10.0
8 3 NaN
Then we will combine any missing values within a group with previous rows:
df["extrapolate"] = df.groupby("ID")["Value"].apply(lambda grp: grp.isnull().cumsum().shift().bfill())
print(df)
ID Value extrapolate
0 1 5.0 0.0
1 1 6.0 0.0
2 1 NaN 0.0
3 2 NaN 1.0
4 2 8.0 1.0
5 2 4.0 1.0
6 2 NaN 1.0
7 2 10.0 2.0
8 3 NaN NaN
You can see that, when grouped by ["ID", "extrapolate"], each missing value falls into the same group as the non-null values of the previous rows.
Now we are ready to do extrapolation (with spline of order=1):
df.groupby(["ID","extrapolate"], as_index=False).apply(lambda grp:grp.interpolate(method="spline",order=1)).drop("extrapolate", axis=1)
ID Value
0 1.0 5.0
1 1.0 6.0
2 1.0 7.0
3 2.0 NaN
4 2.0 8.0
5 2.0 4.0
6 2.0 0.0
7 2.0 10.0
8 NaN NaN
Hope this helps.
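If the krogh/spline machinery feels heavy, a plain sketch of the same "previous values only" idea is to extrapolate each NaN linearly from the two most recent values in its group. This matches the spline order=1 result above (index 6 becomes 0, not 2); the helper name is my own:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'id':    [1, 1, 1, 2, 2, 2, 2, 2, 3],
                   'value': [5, 6, np.nan, np.nan, 8, 4, np.nan, 10, np.nan]})

def extrapolate_prev(s):
    # fill each NaN from the two values directly before it, if both exist
    out = s.copy()
    for i in range(2, len(out)):
        prev1, prev2 = out.iloc[i - 1], out.iloc[i - 2]
        if pd.isna(out.iloc[i]) and pd.notna(prev1) and pd.notna(prev2):
            out.iloc[i] = 2 * prev1 - prev2   # linear continuation
    return out

df['value_prev'] = df.groupby('id')['value'].transform(extrapolate_prev)
```

Rows whose group does not yet have two known previous values (e.g. id 3, or id 2's leading NaN) stay NaN, as in the expected output.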

How to add the sum of some columns at the end of a dataframe

I have a pandas dataframe with 11 columns. I want to add the sum of all values of column 9 and column 10 to the end of the table. So far I have tried two methods:
Assigning the data to a cell with dataframe.iloc[rownumber, 8]. This results in an out-of-bounds error.
Creating a vector with some blanks ('') using the following code:
total = ['', '', '', '', '', '', '', '', dataframe['Column 9'].sum(), dataframe['Column 10'].sum(), '']
dataframe = dataframe.append(total)
The result was not nice, as it added the totals as a vertical vector at the end rather than a horizontal one. What can I do to solve this?
You need to use pandas.DataFrame.append with ignore_index=True:
dataframe=dataframe.append(dataframe[['Column 9','Column 10']].sum(),ignore_index=True).fillna('')
Example:
import pandas as pd
import numpy as np
df=pd.DataFrame()
df['col1']=[1,2,3,4]
df['col2']=[2,3,4,5]
df['col3']=[5,6,7,8]
df['col4']=[5,6,7,8]
Using Append:
df=df.append(df[['col2','col3']].sum(),ignore_index=True)
print(df)
col1 col2 col3 col4
0 1.0 2.0 5.0 5.0
1 2.0 3.0 6.0 6.0
2 3.0 4.0 7.0 7.0
3 4.0 5.0 8.0 8.0
4 NaN 14.0 26.0 NaN
Without NaN values:
df=df.append(df[['col2','col3']].sum(),ignore_index=True).fillna('')
print(df)
col1 col2 col3 col4
0 1 2.0 5.0 5
1 2 3.0 6.0 6
2 3 4.0 7.0 7
3 4 5.0 8.0 8
4 14.0 26.0
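Note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0; on newer versions the same idea can be written with pd.concat. A sketch using the example frame from above:

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 2, 3, 4], 'col2': [2, 3, 4, 5],
                   'col3': [5, 6, 7, 8], 'col4': [5, 6, 7, 8]})

# one-row DataFrame holding the sums of the chosen columns
total = df[['col2', 'col3']].sum().to_frame().T
df = pd.concat([df, total], ignore_index=True)
print(df)
```

Columns not included in the sum (col1, col4) end up as NaN in the new row, exactly as with append; chain .fillna('') if you want them blank.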
Create a new DataFrame with the sums. This example DataFrame has columns 'a' and 'b'; df1 is the DataFrame that needs to be summed up, and df3 is a one-line DataFrame containing only the sums:
data = [[df1.a.sum(),df1.b.sum()]]
df3 = pd.DataFrame(data,columns=['a','b'])
Then append it to end:
df1.append(df3)
Simply try this (replace test with your dataframe's name).
Row-wise sum (which you asked for):
test['Total'] = test[['col9','col10']].sum(axis=1)
print(test)
Column-wise sum:
test.loc['Total'] = test[['col9','col10']].sum()
test.fillna('',inplace=True)
print(test)
IIUC, this is what you need (change the numbers 8 and 9 to suit your needs):
df['total']=df.iloc[ : ,[8,9]].sum(axis=1) #horizontal sum
df['total1']=df.iloc[ : ,[8,9]].sum().sum() #Vertical sum
df.loc['total2']=df.iloc[ : ,[8,9]].sum() # vertical sum in rows for only columns 8 & 9
Example
a=np.arange(0, 11, 1)
b=np.random.randint(10, size=(5,11))
df=pd.DataFrame(columns=a, data=b)
0 1 2 3 4 5 6 7 8 9 10
0 0 5 1 3 4 8 6 6 8 1 0
1 9 9 8 9 9 2 3 8 9 3 6
2 5 7 9 0 8 7 8 8 7 1 8
3 0 7 2 8 8 3 3 0 4 8 2
4 9 9 2 5 2 2 5 0 3 4 1
Output:
0 1 2 3 4 5 6 7 8 9 10 total total1
0 0.0 5.0 1.0 3.0 4.0 8.0 6.0 6.0 8.0 1.0 0.0 9.0 48.0
1 9.0 9.0 8.0 9.0 9.0 2.0 3.0 8.0 9.0 3.0 6.0 12.0 48.0
2 5.0 7.0 9.0 0.0 8.0 7.0 8.0 8.0 7.0 1.0 8.0 8.0 48.0
3 0.0 7.0 2.0 8.0 8.0 3.0 3.0 0.0 4.0 8.0 2.0 12.0 48.0
4 9.0 9.0 2.0 5.0 2.0 2.0 5.0 0.0 3.0 4.0 1.0 7.0 48.0
total2 NaN NaN NaN NaN NaN NaN NaN NaN 31.0 17.0 NaN NaN NaN

Computing the difference between first and last values in a rolling window

I am using the Pandas rolling window tool on a one-column dataframe whose index is in datetime form.
I would like to compute, for each window, the difference between the first value and the last value of that window. How do I refer to the relative index when passing a lambda function (in the brackets below)?
df2 = df.rolling('3s').apply(...)
IIUC:
In [93]: df = pd.DataFrame(np.random.randint(10,size=(9, 3)))
In [94]: df
Out[94]:
0 1 2
0 7 4 5
1 9 9 3
2 1 7 6
3 0 9 2
4 2 3 7
5 6 7 1
6 1 0 1
7 8 4 7
8 0 0 9
In [95]: df.rolling(window=3).apply(lambda x: x[0]-x[-1], raw=True)
Out[95]:
0 1 2
0 NaN NaN NaN
1 NaN NaN NaN
2 6.0 -3.0 -1.0
3 9.0 0.0 1.0
4 -1.0 4.0 -1.0
5 -6.0 2.0 1.0
6 1.0 3.0 6.0
7 -2.0 3.0 -6.0
8 1.0 0.0 -8.0
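For the datetime-indexed, '3s' case the question actually asks about, a sketch with sample data of my own (with raw=False the lambda receives a Series, so positional access goes through .iloc):

```python
import pandas as pd

idx = pd.date_range('2021-01-01', periods=6, freq='s')
df = pd.DataFrame({'x': [1.0, 3.0, 2.0, 8.0, 5.0, 4.0]}, index=idx)

# first minus last value of each trailing 3-second window
df2 = df.rolling('3s').apply(lambda x: x.iloc[0] - x.iloc[-1], raw=False)
print(df2)
```

With a time-based window there is no minimum window size, so even the first row (a one-element window) produces a value rather than NaN.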
