I have a flat DataFrame like this:
And I would like to convert this into a DataFrame like this:
For every test (T) and every version (Version), I would like to sum up the counts of answers mapped onto a given Likert scale (I cut it down to 3 entries for demonstration purposes) and express them as percentages.
The whole set of Likert-scale values for every combination of T and Version should sum up to 100 percent.
likert = {
'Agree': 1,
'Undecided': 2,
'Disagree': 3,
}
How is this possible?
Thanks for your help!
Probably not the most elegant solution, but I think this achieves your goal. Suppose your dataframe is named df (I randomly sampled between the scales, so my df isn't exactly what you described):
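For instance, such a df could be built like this (a hypothetical construction using the column names from the melt call below; because the answers are sampled at random, the counts in the printed outputs further down will not be reproduced exactly):
import numpy as np
import pandas as pd

scales = ['Agree', 'Undecided', 'Disagree']
df = pd.DataFrame({'T': [1, 1, 1, 1, 2, 2, 2, 2],
                   'Version': ['A', 'A', 'B', 'B'] * 2,
                   'Q1': np.random.choice(scales, 8),   # randomly sampled answers to question 1
                   'Q2': np.random.choice(scales, 8)})  # randomly sampled answers to question 2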
res = df.melt(id_vars=['T', 'Version'], value_vars=['Q1', 'Q2'], value_name='Scale')
This transforms your dataframe to long format:
# T Version variable Scale
# 0 1 A Q1 Undecided
# 1 1 A Q1 Disagree
# 2 1 A Q1 Undecided
# 3 1 A Q1 Agree
Then you want to calculate the size of every combination of your variables, which can be accomplished in the following way:
res = res.groupby(['T', 'Version', 'Scale', 'variable']).size()
Which yields:
# T Version Scale variable
# 1 A Agree Q1 2
# Q2 1
# Disagree Q2 3
# Undecided Q1 2
# B Agree Q1 1
Then, to move Q1 and Q2 to the columns, you unstack the last index level like so:
res = res.unstack(level=-1).fillna(0)
# variable Q1 Q2
# T Version Scale
# 1 A Agree 2.0 1.0
# Disagree 0.0 3.0
# Undecided 2.0 0.0
Finally, to compute the percent for each combination of the first two index levels:
res = res.groupby(level=[0, 1]).apply(lambda x: 100. * x / x.sum())
Which gives the desired result:
# variable Q1 Q2
# T Version Scale
# 1 A Agree 50.000000 25.000000
# Disagree 0.000000 75.000000
# Undecided 50.000000 0.000000
# B Agree 33.333333 0.000000
# Disagree 66.666667 66.666667
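A quick sanity check (a minimal sketch) that each (T, Version) block really sums to 100 percent per question column:
# every (T, Version) group should show 100.0 in each question column
# (up to floating-point rounding), provided the group has answers for that question
res.groupby(level=[0, 1]).sum()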
I'm trying to create a new df from race_dbs that's grouped by 'horse_id' showing the number of times 'place' = 1 as well as the total number of times that 'horse_id' occurs.
Some background on the dataset, if it's helpful:
race_dbs contains horse race data. There are 12 horses in a race; for each one, the odds, fire, place, time, and gate number are shown.
What I'm trying to achieve from this code is the calculation of win rates for each horse.
A win is denoted by 'place' = 1
Total race count will be calculated by how many times a particular 'horse_id' occurs in the db.
race_dbs

race_id   horse_id  odds  fire  place  horse_time  gate
V14qANzi  398807    NaN   0     1      72.0191     7
xeieZak   191424    NaN   0     8      131.3010    10
xeieZak   139335    NaN   0     1      131.3713    9
xeieZak   137195    NaN   0     11     131.6310    11
xeieZak   398807    NaN   0     12     131.7886    2
...       ...       ...   ...   ...    ...         ...
From this simple table the output would look like the one below, but please bear in mind my dataset is very large, containing 12,882,353 rows in total.
desired output

horse_id  wins  races  win rate
398807    1     2      50%
191424    0     1      0%
139335    1     1      100%
137195    0     1      0%
...       ...   ...    ...
It should be noted that I'm a complete coding beginner so forgive me if this is an easy solve.
I have tried to use the groupby and lambda pandas functions but I am struggling to combine both functions, and believe there will be a much simpler way.
import pandas as pd
race_db = pd.read_csv('horse_race_data_db.csv')
race_db_2 = pd.read_csv('2_horse_race_data.csv')
frames = [race_db, race_db_2]
race_dbs = pd.concat(frames, ignore_index=True, sort=False)
race_dbs_horse_wins = race_dbs.groupby('horse_id')['place'].apply(lambda x: x[x == 1].count())
race_dbs_horse_sums = race_dbs.groupby('horse_id').aggregate({"horse_id":['sum']})
Thanks for the help!
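To make the example reproducible without the CSV files, the five sample rows shown above can be rebuilt directly (a sketch of race_dbs standing in for the real data):
import numpy as np
import pandas as pd

# the five sample rows from the race_dbs table above
race_dbs = pd.DataFrame({
    'race_id':    ['V14qANzi', 'xeieZak', 'xeieZak', 'xeieZak', 'xeieZak'],
    'horse_id':   [398807, 191424, 139335, 137195, 398807],
    'odds':       [np.nan] * 5,
    'fire':       [0, 0, 0, 0, 0],
    'place':      [1, 8, 1, 11, 12],
    'horse_time': [72.0191, 131.3010, 131.3713, 131.6310, 131.7886],
    'gate':       [7, 10, 9, 11, 2],
})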
To count the True values, create a helper boolean column and aggregate it with sum; for the win rate aggregate it with mean, and for the race count use GroupBy.size, all as named aggregations in GroupBy.agg:
out = (race_dbs.assign(no1 = race_dbs['place'].eq(1))
.groupby('horse_id', sort=False, as_index=False)
.agg(**{'wins':('no1','sum'),
'races':('horse_id','size'),
'win rate':('no1','mean')}))
print (out)
horse_id wins races win rate
0 398807 1 2 0.5
1 191424 0 1 0.0
2 139335 1 1 1.0
3 137195 0 1 0.0
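If you want the win rate rendered as a percentage string, as in the desired output, one possible follow-up (a sketch) is:
# turn the ratio into a whole-number percentage string, e.g. 0.5 -> '50%'
out['win rate'] = (out['win rate'] * 100).round().astype(int).astype(str) + '%'
print(out)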
Can you try it this way?
Example code:
import pandas as pd
import numpy as np
new_technologies= {
'Courses':["Python","Java","Python","Ruby","Ruby"],
'Fees' :[22000,25000,23000,24000,26000],
'Duration':['30days','50days','30days', '30days', '30days']
}
print('new_technologies:',new_technologies)
df = pd.DataFrame(new_technologies)
print('df:',df)
# calculate percentage of the aggregated values
df2 = df.groupby(['Courses', 'Fees']).agg({'Fees': 'sum'})
print(df2)
# Percentage by lambda and DataFrame.apply() method.
df3 = df2.groupby(level=0).apply(lambda x:100 * x / float(x.sum()))
print(df3)
output:
Consider the following dataframe
df = pd.DataFrame()
df['Amount'] = [13,17,31,48]
I want to calculate for each row the std of the previous 2 values of the column "Amount". For example:
For the third row, the value should be the std of 17 and 13 (which is 2).
For the fourth row, the value should be the std of 31 and 17 (which is 7).
This is what I did:
df['std previous 2 weeks'] = df['Amount'].shift(1).rolling(2).std()
But this is not working. I thought my problem was an index problem, yet the same pattern works perfectly with the sum method:
df['total amount of previous 2 weeks'] = df['Amount'].shift(1).rolling(2).sum()
P.S.: I know this can be done in other ways, but I want to know the reason why this does not work (and how to fix it).
You could shift after rolling.std. Also, the degrees of freedom (ddof) defaults to 1; it seems you want it to be 0.
df['Stdev'] = df['Amount'].rolling(2).std(ddof=0).shift()
Output:
Amount Stdev
0 13 NaN
1 17 NaN
2 31 2.0
3 48 7.0
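As for the why: the shift-then-rolling version from the question is not broken either; it was just computing the sample standard deviation (ddof=1). A quick check (sketch) shows that setting ddof=0 there gives the same numbers:
# the original pattern, shifting first, works once ddof is set to 0
df['std previous 2 weeks'] = df['Amount'].shift(1).rolling(2).std(ddof=0)
# rows 2 and 3 now hold 2.0 and 7.0, matching the Stdev column above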
df1 = pd.DataFrame({"DEPTH":[0.5, 1, 1.5, 2, 2.5],
"POROSITY":[10, 22, 15, 30, 20],
"WELL":"well 1"})
df2 = pd.DataFrame({"Well":"well 1",
"Marker":["Fm 1","Fm 2"],
"Depth":[0.7, 1.7]})
Hello everyone. I have two dataframes, and I would like to create a new column in df1, for example df1["FORMATIONS"], filled with the values of df2["Marker"] based on the depth limits given by df2["Depth"] and df1["DEPTH"].
So, for example, if df2["Depth"] = 1.7, then all samples in df1 with df1["DEPTH"] > 1.7 should be labelled as "Fm 2" in this new column df1["FORMATIONS"].
And the final dataframe df1 should look like this:
DEPTH POROSITY WELL FORMATIONS
0.5 10 well 1 nan
1 22 well 1 Fm 1
1.5 15 well 1 Fm 1
2 30 well 1 Fm 2
2.5 20 well 1 Fm 2
Could anyone help me?
What you're doing here is transforming continuous data into categorical data. There are many ways to do this with pandas, but one of the better known ways is using pandas.cut.
When specifying the bins argument, you need to append float('inf') to the list, to represent that the last bin goes to infinity.
df1["FORMATIONS"] = pd.cut(df1.DEPTH, list(df2.Depth) + [float('inf')], labels=df2.Marker)
df1 will now match the desired output shown in the question.
Use pandas.merge_asof:
NB. the columns used for the merge need to be sorted first
pd.merge_asof(df1,
df2[['Marker', 'Depth']].rename(columns={'Marker': 'Formations'}),
left_on='DEPTH', right_on='Depth')
output:
DEPTH POROSITY WELL Formations Depth
0 0.5 10 well 1 NaN NaN
1 1.0 22 well 1 Fm 1 0.7
2 1.5 15 well 1 Fm 1 0.7
3 2.0 30 well 1 Fm 2 1.7
4 2.5 20 well 1 Fm 2 1.7
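If you want the exact column layout from the question, one option (a sketch) is to rename the marker column to FORMATIONS and drop the helper Depth column after the merge:
out = (pd.merge_asof(df1,
                     df2[['Marker', 'Depth']].rename(columns={'Marker': 'FORMATIONS'}),
                     left_on='DEPTH', right_on='Depth')
         .drop(columns='Depth'))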
I have a column that I'm trying to smooth out the results. Most of the data creates a smooth chart but sometimes I get a random spike. I want to reduce the impact of the spike.
My thought was to take each outlier and just replace it with the mean of the surrounding values, but I'm struggling and not getting the result I want.
Here's what I'm doing right now:
df = pd.DataFrame(np.random.randint(0,100,size=(5, 1)), columns=list('A'))
def aDetection(inputs):
median = inputs["A"].median()
std = inputs["A"].std()
outliers = (inputs["A"] - median).abs() > std
print("outliers")
print(outliers)
inputs[outliers]["A"] = np.nan #this isn't working.
inputs[outliers] = np.nan #works but wipes out entire row
inputs['A'].fillna(median, inplace=True)
print("modified:")
print(inputs)
print("original")
print(df)
aDetection(df)
original
A
0 4
1 86
2 40
3 99
4 97
outliers
0 True
1 False
2 True
3 False
4 False
Name: A, dtype: bool
modified:
A
0 86.0
1 86.0
2 86.0
3 99.0
4 97.0
For one, it seems to change the entire row, not just the single column. But the bigger problem is that all the outliers in my example end up as 86. I realize this is because I used a single value computed from the entire column, but I would like each outlier to be replaced with the mean of the values around the missing entry instead.
For a single column, you can do your task with the following one-liner
(for readability folded into 2 lines):
df.A = df.A.mask((df.A - df.A.median()).abs() > df.A.std(),
pd.concat([df.A.shift(), df.A.shift(-1)], axis=1).mean(axis=1))
Details:
(df.A - df.A.median()).abs() > df.A.std() - computes outliers.
df.A.shift() - computes a Series of previous values.
df.A.shift(-1) - computes a Series of following values.
pd.concat(...) - creates a DataFrame from both the above Series.
mean(axis=1) - computes means by rows.
mask(...) - takes original values of A column for non-outliers
and the value from concat for outliers.
The result is:
A
0 86.0
1 86.0
2 92.5
3 99.0
4 97.0
If you want to apply this mechanism to all columns of your DataFrame,
then:
Change the above code to a function:
def replOutliers(col):
return col.mask((col - col.median()).abs() > col.std(),
pd.concat([col.shift(), col.shift(-1)], axis=1).mean(axis=1))
Apply it (to each column):
df = df.apply(replOutliers)
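As a side note on why inputs[outliers]["A"] = np.nan in the question had no effect: chained indexing like that assigns into a temporary copy. Inside the function, the supported way to set only column A for the outlier rows is .loc (a minimal sketch):
# assign NaN only in column A and only for the outlier rows
inputs.loc[outliers, 'A'] = np.nan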
This is my first question on Stack Overflow, please let me know how I can help you help me if my question is unclear.
Goal: Use Python and Pandas to Outer join (or merge) Data Sets containing different experimental trials where the "x" axis of each trial is extremely similar but has some deviations. Most importantly, the "x" axis increases, hits a maximum and then decreases, often overlapping with previously existing "x" points.
Problem: When I go to join/merge the datasets on "x", the "x" column is sorted, messing up the order of the collected data and making it impossible to plot it correctly.
Here is a small example of what I am trying to do:
The site wouldn't let me add pictures because I am new. Here is the code to generate these example data sets.
Data Sets :
Import:
import numpy as np
import pandas as pd
import random as rand
Code :
T1 = {'x':np.array([1,1.5,2,2.5,3,3.5,4,5,2,1]),'y':np.array([10000,8500,7400,6450,5670,5100,4600,4500,8400,9000]),'z':np.array(rand.sample(range(0,10000),10))}
T2 = {'x':np.array([1,2,3,4,5,6,7,2,1.5,1]),'y':np.array([10500,7700,5500,4560,4300,3900,3800,5400,8400,8800]),'z':np.array(rand.sample(range(0,10000),10))}
Trial1 = pd.DataFrame(T1)
Trial2 = pd.DataFrame(T2)
Attempt to Merge/Join:
WomboCombo = Trial1.join(Trial2,how='outer',lsuffix=1,rsuffix=2, on='x')
WomboCombo2 = pd.merge(left=Trial1, right=Trial2, how='outer', left_on='x', right_on='x')
Attempt to split into two parts, an increasing and a decreasing part (I manually found the row number where "x" starts decreasing):
Trial1Inc = Trial1[0:8]
Trial2Inc = Trial2[0:7]
Result - Merge works well, join messes with the "x" column, not sure why:
Trial1Inc.merge(Trial2Inc,on='x',how='outer', suffixes=[1,2])
Incrementing section Merge Result
Trial1Inc.join(Trial2Inc,on='x',how='outer', lsuffix=1,rsuffix=2)
Incrementing section Join Result
Hopefully my example is clear: the "x" column in Trial 1 increases until 5, then decreases back towards 0. In Trial 2, I altered the test a bit because I noticed that I needed data at a slightly higher "x" value. Trial 2 increases until 7 and then quickly decreases back towards 0.
My end goal is to plot the average of all y values (where there is overlap between the trials) against the corresponding x values.
If there is overlap I can add error bars. Pandas is almost perfect for what I am trying to do because an Outer join adds null values where there is no overlap and is capable of horizontally concatenating the two trials when there is overlap.
All that's left now is to figure out how to join on the "x" column but maintain its order of increasing and then decreasing values. The reason this order matters is that the "y" value the first time a given "x" is reached seems to be greater than the "y" value when "x" is decreasing (e.g. in Trial 1, when x=1 early on, y=10000, but later in the trial when we come back to x=1, y=9000); this trend is important. When pandas sorts the column before merging, instead of a clean curve showing "y" decreasing as "x" increases and then the reverse, there are vertical downward jumps at any point where the data was joined.
I would really appreciate any help with either:
A) a perfect solution that lets me join on "x" when "x" contains duplicates
B) an efficient way to split the data sets into increasing "x" and decreasing "x" so that I can merge the increasing and decreasing sections of each trial separately and then vertically concat them.
Hopefully I did an okay job explaining the problem I would like to solve. Please let me know if I can clarify anything.
Thanks for the help!
I think #xyzjayne's idea of splitting the dataframe is a great one.
Splitting Trial1 and Trial2:
# index of max x value in Trial2
t2_max_index = Trial2.index[Trial2['x'] == Trial2['x'].max()].tolist()
# split Trial2 by max value
trial2_high = Trial2.loc[:t2_max_index[0]].set_index('x')
trial2_low = Trial2.loc[t2_max_index[0]+1:].set_index('x')
# index of max x value in Trial1
t1_max_index = Trial1.index[Trial1['x'] == Trial1['x'].max()].tolist()
# split Trial1 by max value
trial1_high = Trial1.loc[:t1_max_index[0]].set_index('x')
trial1_low = Trial1.loc[t1_max_index[0]+1:].set_index('x')
Once we split the dataframes we join the highers together and the lowers together:
WomboCombo_high = trial1_high.join(trial2_high, how='outer', lsuffix='1', rsuffix='2', on='x').reset_index()
WomboCombo_low = trial1_low.join(trial2_low, how='outer', lsuffix='1', rsuffix='2', on='x').reset_index()
We now combine them together into one dataframe, WomboCombo:
WomboCombo = pd.concat([WomboCombo_high, WomboCombo_low])
OUTPUT:
x y1 z1 y2 z2
0 1.0 10000.0 3425.0 10500.0 3061.0
1 1.5 8500.0 5059.0 NaN NaN
2 2.0 7400.0 2739.0 7700.0 7090.0
3 2.5 6450.0 9912.0 NaN NaN
4 3.0 5670.0 2099.0 5500.0 1140.0
5 3.5 5100.0 9637.0 NaN NaN
6 4.0 4600.0 7581.0 4560.0 9584.0
7 5.0 4500.0 8616.0 4300.0 3940.0
8 6.0 NaN NaN 3900.0 5896.0
9 7.0 NaN NaN 3800.0 6211.0
0 2.0 8400.0 3181.0 5400.0 9529.0
2 1.5 NaN NaN 8400.0 3260.0
1 1.0 9000.0 4280.0 8800.0 8303.0
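Since the stated goal is to plot the average of the overlapping y values against x, a possible follow-up on this combined frame (a sketch) is:
# row-wise mean of the two trials; NaNs are skipped, so points present in
# only one trial simply keep that trial's y value
WomboCombo['y_mean'] = WomboCombo[['y1', 'y2']].mean(axis=1)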
One possible solution is to give your trial rows specific IDs and then merge on the IDs. That should keep the x values from being sorted.
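A minimal sketch of that idea, assuming the IDs are simply the row positions within each trial; note that this pairs rows positionally rather than by matching x values:
# give each trial's rows an explicit ID reflecting collection order
t1 = Trial1.reset_index().rename(columns={'index': 'row_id'})
t2 = Trial2.reset_index().rename(columns={'index': 'row_id'})

# merging on the ID keeps the original order instead of sorting by x
combo = t1.merge(t2, on='row_id', how='outer', suffixes=('1', '2'))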
Here's what I was trying out, but it doesn't address varying numbers of data points. I like gym-hh's answer, though it's not clear to me that you wanted two columns of y,z pairs. So you could combine his ideas and this code to get what you need.
Trial1['index1'] = Trial1.index
Trial2['index1'] = Trial2.index
WomboCombo = Trial1.append(Trial2)
WomboCombo.sort_values(by=['index1'],inplace=True)
WomboCombo
Output:
x y z index1
0 1.0 10000 7148 0
0 1.0 10500 2745 0
1 1.5 8500 248 1
1 2.0 7700 9505 1
2 2.0 7400 6380 2
2 3.0 5500 3401 2
3 2.5 6450 6183 3
3 4.0 4560 5281 3
4 3.0 5670 99 4
4 5.0 4300 8864 4
5 3.5 5100 5132 5
5 6.0 3900 7570 5
6 4.0 4600 9951 6
6 7.0 3800 7447 6
7 2.0 5400 3713 7
7 5.0 4500 3863 7
8 1.5 8400 8776 8
8 2.0 8400 1592 8
9 1.0 9000 2167 9
9 1.0 8800 782 9