subtract between two rows - python

I have a dataset similar to this:
name group val1 val2
John A 3 2
Cici B 4 3
Ian C 2 2
Zhang D 2 1
Zhang E 1 2
Ian F 1 2
John B 2 1
Ian B 1 2
I created a pivot table with this piece of code, and the data now looks like this:
df_pivot = pd.pivot_table(df, values=['val1', 'val2'], index=['name', 'group']).reset_index()
df_pivot
name group val1 val2
John A 3 2
John B 2 1
Ian C 2 2
Ian F 1 2
Ian B 1 2
Zhang D 2 1
Zhang E 1 2
Cici B 4 3
After the pivot table, I need to 1) group by name and 2) calculate the delta between groups. Take John as an example.
The output should be:
John A-B 1 1
Ian C-F 1 0
F-B 0 0
B-C 1 0 (the delta is -1, but we only do absolute value)
How do I move forward from my pivot table?

Getting each combination to subtract (a-b, a-c, b-c) won't be directly possible with a simple groupby function. I suggest that you pivot your data and use a custom function to calculate each combination of possible differences:
import pandas as pd
import itertools

def combo_subtraction(df, level=0):
    unique_groups = df.columns.levels[level]
    combos = itertools.combinations(unique_groups, 2)
    pieces = {}
    for g1, g2 in combos:
        name = "{}-{}".format(g1, g2)
        pieces[name] = df.xs(g1, level=level, axis=1) - df.xs(g2, level=level, axis=1)
    return pd.concat(pieces)

out = (df.pivot(index="name", columns="group")   # convert data to wide format
         .pipe(combo_subtraction, level=1)       # apply our combination subtraction
         .dropna()                               # clean up the result
         .swaplevel()
         .sort_index())
print(out)
print(out)
val1 val2
name
Ian A-B 0.0 0.0
A-C -1.0 0.0
B-C -1.0 0.0
John A-B 1.0 1.0
Zhang A-B 1.0 -1.0
The combo_subtraction function simply iterates over every possible pair of group labels and performs the subtraction. It then sticks the results of these pairs back together, forming the result.
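Since the question asks for absolute deltas, the signed differences above can simply be wrapped in abs() at the end (a small addition on top of the answer's result):
out = out.abs()   # keep only the magnitude of each pairwise difference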

Comparing two data frames columns and addition of matching values

I have two data frames with similar data, and I would like to subtract matching values. Example:
df1:
Letter FREQ Diff
0 A 20 NaN
1 B 12 NaN
2 C 5 NaN
3 D 4 NaN
df2:
Letter FREQ
0 A 19
1 B 11
3 D 2
If we can find the same letter in the column "Letter", I would like to create a new column with the subtraction of the two frequency columns.
Expected output :
df1:
Letter FREQ Diff
0 A 20 1
1 B 12 1
2 C 5 5
3 D 4 2
I have tried to begin like this, but obviously it doesn't work
for i in df1.Letter:
    for j in df2.Letter:
        if i == j:
            df1.Difference[j] == (df1.Frequency[i] - df2.Frequency[j])
        else:
            pass
Thank you for your help!
Use df.merge with fillna:
In [1101]: res = df1.merge(df2, on='Letter', how='outer')
In [1108]: res['Diff'] = (res.FREQ_x - res.FREQ_y).fillna(res.FREQ_x)
In [1110]: res = res.drop('FREQ_y', axis=1).rename(columns={'FREQ_x': 'FREQ'})
In [1111]: res
Out[1111]:
  Letter  FREQ  Diff
0      A    20   1.0
1      B    12   1.0
2      C     5   5.0
3      D     4   2.0
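For completeness, here is a merge-free sketch of the same idea using map, assuming the Letter values in df2 are unique; it fills df1's existing Diff column in place:
lookup = df2.set_index('Letter')['FREQ']
df1['Diff'] = (df1['FREQ'] - df1['Letter'].map(lookup)).fillna(df1['FREQ'])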

Is there a cleaner way to write the code for conditional replacement of outlier values with the group mean in a dataframe

For the DF below, in the Value column, Product 3 (i.e., 100) and Product 4 (i.e., 98) have amounts that are outliers. I want to:
1. group by ['Class']
2. obtain the mean of [Value] excluding the outlier amount
3. replace the outlier amount with the mean calculated in step 2.
Any suggestions on how to structure the code would be greatly appreciated. I have code that works for the sample table, but I have a feeling it might not work when I implement the real solution.
   Product Class  Value
0        1     A      5
1        2     A      4
2        3     A    100
3        4     B     98
4        5     B     20
5        6     B     25
My code implementation:
# Establish the condition to remove the outlier rows from the DF
stds = 1.0
filtered_df = df[~df.groupby('Class')['Value'].transform(lambda x: abs((x-x.mean()) / x.std()) > stds)]
Output:
Product Class Value
0 1 A 5
1 2 A 4
4 5 B 20
5 6 B 25
# compute mean of each class without the outliers
class_means = filtered_df[['Class', 'Value']].groupby(['Class'])['Value'].mean()
Output:
Class
A 4.5
B 22.5
#extract rows in DF that are outliers and fail the test
outlier_df = df[df.groupby('Class')['Value'].transform(lambda x: abs((x-x.mean()) / x.std()) > stds)]
outlier_df
Output:
Product Class Value
2 3 A 100
3 4 B 98
#replace outlier values with computed means grouped by class
outlier_df['Value'] = np.where((outlier_df.Class == class_means.index), class_means,outlier_df.Value)
outlier_df
Output:
Product Class Value
2 3 A 4.5
3 4 B 22.5
#recombine cleaned dataframes
df_cleaned = pd.concat([filtered_df,outlier_df], axis=0 )
df_cleaned
Output:
Product Class Value
0 1 A 5.0
1 2 A 4.0
4 5 B 20.0
5 6 B 25.0
2 3 A 4.5
3 4 B 22.5
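(As a small tidy-up of the code above, not part of the original question: the original row order of the recombined frame can be restored with sort_index.)
df_cleaned = df_cleaned.sort_index()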
Proceed as follows:
Start from your code:
stds = 1.0
Save your lambda function under a variable:
isOutlier = lambda x: abs((x - x.mean()) / x.std()) > stds
Define the following function, to be applied to each group:
def newValue(grp):
    val = grp.Value
    outl = isOutlier(val)
    return val.mask(outl, val[~outl].mean())
Generate new Value column:
df.Value = df.groupby('Class', group_keys=False).apply(newValue)
The result is:
Product Class Value
0 1 A 5.0
1 2 A 4.0
2 3 A 4.5
3 4 B 22.5
4 5 B 20.0
5 6 B 25.0
You don't even lose the original row order.
Edit
Or you can "incorporate" the content of your lambda function into newValue
(since you don't call it anywhere else):
def newValue(grp):
    val = grp.Value
    outl = abs((val - val.mean()) / val.std()) > stds
    return val.mask(outl, val[~outl].mean())
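For reference, here is a minimal end-to-end sketch of this approach, assuming the sample data from the question (recent pandas versions may emit a warning about applying to the grouping column, but the result is the same):
import pandas as pd

df = pd.DataFrame({'Product': [1, 2, 3, 4, 5, 6],
                   'Class':   ['A', 'A', 'A', 'B', 'B', 'B'],
                   'Value':   [5, 4, 100, 98, 20, 25]})
stds = 1.0

def newValue(grp):
    val = grp.Value
    outl = abs((val - val.mean()) / val.std()) > stds
    return val.mask(outl, val[~outl].mean())

df.Value = df.groupby('Class', group_keys=False).apply(newValue)
print(df)   # Products 3 and 4 now hold the group means 4.5 and 22.5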

Get Sum of Every Time Two Values Match

My googling has failed me; I think my main issue is that I'm unsure how to phrase the question (sorry about the crappy title). I'm trying to find the total each time 2 people vote the same way. Below you will see an example of how the data looks and the output I was looking for. I have a working solution, but it's very slow (see bottom), and I was wondering if there's a better way to go about this.
This is how the data is shaped
----------------------------------
event person vote
1 a y
1 b n
1 c nv
1 d nv
1 e y
2 a n
2 b nv
2 c y
2 d n
2 e n
----------------------------------
This is the output im looking for
----------------------------------
Person a b c d e
a 2 0 0 1 2
b 0 2 0 0 0
c 0 0 2 1 0
d 1 0 1 2 1
e 2 0 0 1 2
----------------------------------
Working Code
df = df.pivot(index='event', columns='person', values='vote')
frame = pd.DataFrame(columns=df.columns, index=df.columns)
for person1, value in frame.iterrows():
    for person2 in frame:
        count = 0
        for i, row in df.iterrows():
            person1_votes = row[person1]
            person2_votes = row[person2]
            if person1_votes == person2_votes:
                count += 1
        frame.at[person1, person2] = count
Try looking at your problem in a different way:
df=df.assign(key=1)
mergedf=df.merge(df,on=['event','key'])
mergedf['equal']=mergedf['vote_x'].eq(mergedf['vote_y'])
output=mergedf.groupby(['person_x','person_y'])['equal'].sum().unstack()
output
Out[1241]:
person_y a b c d e
person_x
a 2.0 0.0 0.0 1.0 2.0
b 0.0 2.0 0.0 0.0 0.0
c 0.0 0.0 2.0 1.0 0.0
d 1.0 0.0 1.0 2.0 1.0
e 2.0 0.0 0.0 1.0 2.0
@Wen-Ben already answered your question. It is based on the concept of finding all pair-wise combinations of persons and counting those having the same vote. Finding all pair-wise combinations is a cartesian product (cross join). You may read the great post from @cs95 on cartesian product (CROSS JOIN) with pandas.
In your problem, you count matching votes per event, so it is a cross join per event. Therefore, you don't need to add a helper key column as in @cs95's post. You may cross join directly on column event. After the cross join, filter the person<->person pairs having the same vote using query. Finally, use crosstab to count those pairs.
Below is my solution:
df_match = df.merge(df, on='event').query('vote_x == vote_y')
pd.crosstab(index=df_match.person_x, columns=df_match.person_y)
Out[1463]:
person_y a b c d e
person_x
a 2 0 0 1 2
b 0 2 0 0 0
c 0 0 2 1 0
d 1 0 1 2 1
e 2 0 0 1 2
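For larger data, the same counts can also be obtained with a single vectorised numpy comparison on the pivoted table from the question; this is a sketch that assumes every person has a vote recorded for every event (no NaNs in the wide table):
import numpy as np

wide = df.pivot(index='event', columns='person', values='vote')
votes = wide.to_numpy()
counts = pd.DataFrame((votes[:, :, None] == votes[:, None, :]).sum(axis=0),
                      index=wide.columns, columns=wide.columns)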

Is it possible to split a Pandas dataframe using groupby and merge each group with separate dataframes

I have a Pandas dataframe that contains a grouping variable. I would like to merge each group with other dataframes based on the contents of one of the columns. So, for example, I have a dataframe, dfA, which can be defined as:
dfA = pd.DataFrame({'a':[1,2,3,4,5,6],
                    'b':[0,1,0,0,1,1],
                    'c':['a','b','c','d','e','f']})
a b c
0 1 0 a
1 2 1 b
2 3 0 c
3 4 0 d
4 5 1 e
5 6 1 f
Two other dataframes, dfB and dfC, contain a common column ('a') and an extra column ('d') and can be defined as:
dfB = pd.DataFrame({'a':[1,2,3],
                    'd':[11,12,13]})
a d
0 1 11
1 2 12
2 3 13
dfC = pd.DataFrame({'a':[4,5,6],
                    'd':[21,22,23]})
a d
0 4 21
1 5 22
2 6 23
I would like to be able to split dfA based on column 'b' and merge one of the groups with dfB and the other group with dfC to produce an output that looks like:
a b c d
0 1 0 a 11
1 2 1 b 12
2 3 0 c 13
3 4 0 d 21
4 5 1 e 22
5 6 1 f 23
In this simplified version, I could concatenate dfB and dfC and merge with dfA without splitting into groups as shown below:
dfX = pd.concat([dfB,dfC])
dfA = dfA.merge(dfX,on='a',how='left')
print(dfA)
a b c d
0 1 0 a 11
1 2 1 b 12
2 3 0 c 13
3 4 0 d 21
4 5 1 e 22
5 6 1 f 23
However, in the real-world situation, the smaller dataframes will be generated from multiple different complex sources; generating the dataframes and combining into a single dataframe beforehand may not be feasible because there may be overlapping data on the column that will be used for merging the dataframes (but this will be avoided if the dataframe can be split based on the grouping variable). Is it possible to use Pandas groupby() method to do this instead? I was thinking of something like the following (which doesn't work, perhaps because I'm not combining the groups into a new dataframe correctly):
grouped = dfA.groupby('b')
for name, group in grouped:
    if name == 0:
        group = group.merge(dfB,on='a',how='left')
    elif name == 1:
        group = group.merge(dfC,on='a',how='left')
Any thoughts would be appreciated.
This will fix your code
l = []
grouped = dfA.groupby('b')
for name, group in grouped:
    if name == 0:
        group = group.merge(dfB, on='a', how='left')
    elif name == 1:
        group = group.merge(dfC, on='a', how='left')
    l.append(group)
pd.concat(l)
Out[215]:
a b c d
0 1 0 a 11.0
1 3 0 c 13.0
2 4 0 d NaN
0 2 1 b NaN
1 5 1 e 22.0
2 6 1 f 23.0
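The same loop can be written a bit more compactly with a mapping from each group label to the frame it should be merged with (the lookup dict below is a hypothetical stand-in for however the real dataframes are produced):
lookup = {0: dfB, 1: dfC}
out = pd.concat(group.merge(lookup[name], on='a', how='left')
                for name, group in dfA.groupby('b'))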

Group DataFrame, apply function with inputs then add result back to original

I can't find this question anywhere, so I'll just try here instead:
What I'm trying to do is basically alter an existing DataFrame object using groupby-functionality, and a self-written function:
benchmark =
x y z field_1
1 1 3 a
1 2 5 b
9 2 4 a
1 2 5 c
4 6 1 c
What I want to do, is to groupby field_1, apply a function using specific columns as input, in this case columns x and y, then add back the result to the original DataFrame benchmark as a new column called new_field. The function itself is dependent on the value in field_1, i.e. field_1=a will yield a different result compared to field_1=b etc. (hence the grouping to start with).
Pseudo-code would be something like:
1. grouped_data = benchmark.groupby(['field_1'])
2. apply own_function to grouped_data; with inputs ('x', 'y', grouped_data)
3. add back result from function to benchmark as column 'new_field'
Thanks,
Elaboration:
I also have a DataFrame separate_data containing separate values for x,
separate_data =
x a b c
1 1 3 7
2 2 5 6
3 2 4 4
4 2 5 9
5 6 1 10
that will need to be interpolated onto the existing benchmark DataFrame. Which column in separate_data should be used for interpolation depends on column field_1 in benchmark (i.e. the values in the set (a, b, c) above). The interpolated value in the new column is based on the x-value in benchmark.
Result:
benchmark =
x y z field_1 field_new
1 1 3 a interpolate using separate_data with x=1 and col=a
1 2 5 b interpolate using separate_data with x=1 and col=b
9 2 4 a ... etc
1 2 5 c ...
4 6 1 c ...
Makes sense?
EDIT:
I think you need to reshape separate_data first with set_index + stack, set the index names with rename_axis and set the name of the Series with rename.
Then it is possible to group by both levels and apply some function.
Then join it to benchmark with a default left join:
separate_data1 =separate_data.set_index('x').stack().rename_axis(('x','field_1')).rename('d')
print (separate_data1)
x field_1
1 a 1
b 3
c 7
2 a 2
b 5
c 6
3 a 2
b 4
c 4
4 a 2
b 5
c 9
5 a 6
b 1
c 10
Name: d, dtype: int64
If necessary, apply some function; mainly, if there are duplicates in the (x, field_1) pairs, this returns nice unique pairs:
def func(x):
    # sample function
    return x / 2 + x ** 2
separate_data1 = separate_data1.groupby(level=['x','field_1']).apply(func)
print (separate_data1)
x field_1
1 a 1.5
b 10.5
c 52.5
2 a 5.0
b 27.5
c 39.0
3 a 5.0
b 18.0
c 18.0
4 a 5.0
b 27.5
c 85.5
5 a 39.0
b 1.5
c 105.0
Name: d, dtype: float64
benchmark = benchmark.join(separate_data1, on=['x','field_1'])
print (benchmark)
x y z field_1 d
0 1 1 3 a 1.5
1 1 2 5 b 10.5
2 9 2 4 a NaN
3 1 2 5 c 52.5
4 4 6 1 c 85.5
I think you cannot use transform because multiple columns need to be read together.
So use apply:
df1 = benchmark.groupby(['field_1']).apply(func)
And then, for the new column, there are multiple solutions, e.g. use join (default left join) or map.
A sample solution with both methods is here.
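Note that the join above matches only exact x values, which is why the row with x=9 ends up as NaN. If genuine interpolation between the x points in separate_data is needed, one possible sketch is to interpolate per group with numpy (np.interp clamps values outside the tabulated x range to the end points; exact apply behaviour may vary slightly across pandas versions):
import numpy as np

def interp_group(grp):
    col = grp.name    # the field_1 label selects the column of separate_data
    return pd.Series(np.interp(grp['x'], separate_data['x'], separate_data[col]),
                     index=grp.index)

benchmark['field_new'] = benchmark.groupby('field_1', group_keys=False).apply(interp_group)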
Or it is possible to use a flexible apply, which can return a new DataFrame with a new column.
Try something like this:
groups = benchmark.groupby(benchmark["field_1"])
benchmark = benchmark.join(groups.apply(your_function), on="field_1")
In your_function you would create the new column using the other columns that you need, e.g. average them, sum them, etc.
Documentation for apply.
Documentation for join.
Here is a working example:
# Sample function that sums x and y, then appends the field as a string.
def func(x, y, z):
    return (x + y).astype(str) + z

benchmark['new_field'] = benchmark.groupby('field_1')\
                                  .apply(lambda x: func(x['x'], x['y'], x['field_1']))\
                                  .reset_index(level=0, drop=True)
Result:
benchmark
Out[139]:
x y z field_1 new_field
0 1 1 3 a 2a
1 1 2 5 b 3b
2 9 2 4 a 11a
3 1 2 5 c 3c
4 4 6 1 c 10c
