put equation to calculate value for new column - python

I want to create a new column in my table by applying an equation, but there are two possible equations for the new column:
(1) frequency = (total x 100) / hour
(2) frequency = (total x 1000000) / km_length
The table is similar to this:
type hour km_length total
A 1 - 1
B - 2 1
The calculation for "frequency" depends on which of the hour and km_length columns has a value.
I then expect the table to look like this:
type hour km_length total frequency
A 1 - 1 100
B - 2 1 500000
I have tried using np.nan_to_num, but it did not produce the table I expect.
Is there any way I can do this in Python? Looking forward to any help.
Thank you.
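For reference, a minimal reconstruction of the example table (a sketch; the '-' entries stand for missing values):
import pandas as pd

# hypothetical reconstruction of the example table; '-' marks a missing value
df = pd.DataFrame({
    "type": ["A", "B"],
    "hour": [1, "-"],
    "km_length": ["-", 2],
    "total": [1, 1],
})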

We can use np.where to assign values based on a condition:
df[["hour", "km_length"]] = df[["hour", "km_length"]].apply(pd.to_numeric, errors="coerce")
df["frequency"] = np.where(
df["km_length"].isna(),
df["total"] * 100 / df["hour"],
df["total"] * 1_000_000 / df["km_length"]
)
type hour km_length total frequency
0 A 1.0 NaN 1 100.0
1 B NaN 2.0 1 500000.0
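If more than two equations could apply, np.select generalizes the same pattern (a sketch, assuming the same column names and that exactly one of hour/km_length is present per row):
conditions = [df["km_length"].isna(), df["hour"].isna()]
choices = [df["total"] * 100 / df["hour"],             # equation (1)
           df["total"] * 1_000_000 / df["km_length"]]  # equation (2)
df["frequency"] = np.select(conditions, choices, default=np.nan)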

Make your values numeric, then compute both. Because a missing value indicates which equation to use, and because dividing by NaN yields NaN, do both calculations and use .fillna to pick the correct result.
df[['hour', 'km_length']] = df[['hour', 'km_length']].apply(pd.to_numeric, errors='coerce')
s1 = df['total'].divide(df['hour']).multiply(100)
s2 = df['total'].divide(df['km_length']).multiply(10**6)
df['frequency'] = s1.fillna(s2)
type hour km_length total frequency
0 A 1.0 NaN 1 100.0
1 B NaN 2.0 1 500000.0

You can store the data in a NumPy array and fill the frequency column row by row. Since dividing by NaN does not raise an error, check explicitly which value is missing:
import numpy as np

# columns: hour, km_length, total, frequency (frequency initialised to 0)
table = np.array([[1.0, np.nan, 1.0, 0.0],
                  [np.nan, 2.0, 1.0, 0.0]])

for row in table:
    if not np.isnan(row[0]):   # hour present -> equation (1)
        row[3] = (row[2] * 100) / row[0]
    else:                      # fall back to km_length -> equation (2)
        row[3] = (row[2] * 1_000_000) / row[1]

print(table)
This prints the desired frequency values (100 and 500000) in the last column.


Compute ratio and fill pandas column with list of unique values for each group

I have a dataframe as shown below
ID,desk_id,date_val
1,123,21/11/2016
1,123,29/11/2016
1,456,21/12/2016
1,100,29/12/2016
2,318,11/12/2017
2,419,17/12/2017
2,nan,21/12/2017
2,nan,21/12/2017
2,393,28/11/2017
3,nan,21/11/2016
3,nan,21/11/2016
3,nan,21/11/2016
4,nan,11/8/2018
4,nan,16/8/2018
4,nan,21/8/2018
df = pd.read_clipboard(sep=',')
df['date_val'] = pd.to_datetime(df['date_val'])
df.sort_values(by=['ID','date_val'],inplace=True)
I would like to create three columns
a) desk_id_ratio = count(desk_id not NA) / total no. of records for each ID, as a percentage. For example, ID = 2 will have desk_id_ratio = 60 because 3 out of 5 rows have a non-NA desk_id.
b) desk_id_list = store the unique desk_ids for each ID in a list.
c) avg_gap_days = the average difference in days between records for each ID. Hence, I sort the dataframe by date_val above to get a positive value for the mean difference in days.
I was trying something like this:
df['desk_ratio'] = (df.groupby(['ID'])['desk_id'].count().reset_index(drop=True)/df.groupby(['ID']).size().reset_index(drop=True))*100
df['previous_record'] = df.groupby(['ID'])['dateval'].shift()
df['days_bw_records'] = df['dateval'] - df['previous_record']
df['days_bw_records'] = df['days_bw_records'].apply(lambda x: x.days)
df['avg_gap_days'] = df.groupby(['ID'])['days_bw_records'].agg('mean').reset_index()
I expect my output to look like the below.
You can first assign a column that flags non-NaN values in desk_id, then use groupby.agg to compute the desired value for each column: for desk_ratio, take the mean; for date_val, the average gap in days; for desk_id, join the unique IDs.
out = (df.assign(desk_ratio=df['desk_id'].notna())
         .groupby('ID')
         .agg({'desk_ratio': 'mean',
               'date_val': lambda x: round(x.diff().dt.days.mean(), 2),
               'desk_id': lambda x: ';'.join(str(int(el)) for el in x.unique() if el == el)})
         .mul([100, 1, 1]).reset_index())
Output:
ID desk_ratio date_val desk_id
0 1 100.0 12.67 123;456;100
1 2 60.0 9.75 318;393;419
2 3 0.0 0.00
3 4 0.0 42.00
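A sketch of the same logic using named aggregation (available in pandas 0.25+) may read more clearly, and returns the unique desk_ids as an actual list, which is what the question asked for:
out = (df.groupby('ID')
         .agg(desk_ratio=('desk_id', lambda s: round(s.notna().mean() * 100, 2)),
              avg_gap_days=('date_val', lambda s: round(s.diff().dt.days.mean(), 2)),
              desk_id_list=('desk_id', lambda s: list(s.dropna().astype(int).unique())))
         .reset_index())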

Pandas - New column based on the value of another column N rows back, when N is stored in a column

I have a pandas dataframe with example data:
idx price lookback
0 5
1 7 1
2 4 2
3 3 1
4 7 3
5 6 1
Lookback can be positive or negative, but I want to take its absolute value as the number of rows to go back.
I am trying to create a new column that contains the value of price from lookback + 1 rows ago, for example:
idx price lookback lb_price
0 5 NaN NaN
1 7 1 NaN
2 4 2 NaN
3 3 1 7
4 7 3 5
5 6 1 3
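For reference, a reconstruction of the example frame (a sketch; missing lookback values become NaN):
import numpy as np
import pandas as pd

# hypothetical reconstruction of the example frame from the question
df = pd.DataFrame({"price": [5, 7, 4, 3, 7, 6],
                   "lookback": [np.nan, 1, 2, 1, 3, 1]})
df.index.name = "idx"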
I started with what felt like the most obvious way; this did not work:
df['sbc'] = df['price'].shift(df['lookback'].abs() + 1)
I then tried using a lambda; this did not work, but I probably did it wrong:
sbc = lambda c, x: pd.Series(zip(*[c.shift(x+1)]))
df['sbc'] = sbc(df['price'], df['lookback'].abs())
I also tried a loop (which was extremely slow, but worked) but I am sure there is a better way:
lookback = np.nan
for i in range(len(df)):
    if df.loc[i, 'lookback']:
        if not np.isnan(df.loc[i, 'lookback']):
            lookback = abs(int(df.loc[i, 'lookback']))
    if not np.isnan(lookback) and (lookback + 1) < i:
        df.loc[i, 'lb_price'] = df.loc[i - (lookback + 1), 'price']
I have seen examples using lambda, df.apply, and perhaps Series.map but they are not clear to me as I am quite a novice with Python and Pandas.
I am looking for the fastest way I can do this, if there is a way without using a loop.
Also, for what it's worth, I plan to use this computed column to create yet another column, which I can do as follows:
df['streak-roc'] = 100 * (df['price'] - df['lb_price']) / df['lb_price']
But if I can combine all of it into one really efficient way of doing it, that would be ideal.
Solution!
Several of the provided solutions worked great (thank you!), but all needed some small tweaks to deal with the potential for negative numbers and the fact that it is lookback + 1, not - 1, so I felt it was prudent to post my modifications here.
All of them were significantly faster than my original loop, which took 5m 26s to process my dataset.
I marked the one I observed to be the fastest as accepted, since improving the speed of my loop was the main objective.
Edited Solutions
From Manas Sambare - 41 seconds
df['lb_price'] = df.apply(
    lambda x: df['price'][x.name - (abs(int(x['lookback'])) + 1)]
    if not np.isnan(x['lookback']) and x.name >= (abs(int(x['lookback'])) + 1)
    else np.nan,
    axis=1)
From mannh - 43 seconds
def get_lb_price(row, df):
    if not np.isnan(row['lookback']):
        lb_idx = row.name - (abs(int(row['lookback'])) + 1)
        if lb_idx >= 0:
            return df.loc[lb_idx, 'price']
        else:
            return np.nan

df['lb_price'] = df.apply(get_lb_price, axis=1, args=(df,))
From Bill - 18 seconds
lookup_idxs = df.index.values - (abs(df['lookback'].values) + 1)
valid_lookups = lookup_idxs >= 0
df['lb_price'] = np.nan
df.loc[valid_lookups, 'lb_price'] = df['price'].to_numpy()[lookup_idxs[valid_lookups].astype(int)]
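With lb_price computed, the streak-roc column from the question follows directly:
df['streak-roc'] = 100 * (df['price'] - df['lb_price']) / df['lb_price']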
By getting the row's index inside of the df.apply() call using row.name, you can generate the 'lb_price' data relative to the row you are currently on.
%time
df.apply(
    lambda x: df['price'][x.name - int(x['lookback'] + 1)]
    if not np.isnan(x['lookback']) and x.name >= x['lookback'] + 1
    else np.nan,
    axis=1)
# > CPU times: user 2 µs, sys: 0 ns, total: 2 µs
# > Wall time: 4.05 µs
FYI: There is an error in your example as idx[5]'s lb_price should be 3 and not 7.
Here is an example which uses a regular function
def get_lb_price(row, df):
    lb_idx = row.name - abs(row['lookback']) - 1
    if lb_idx >= 0:
        return df.loc[lb_idx, 'price']
    else:
        return np.nan

df['lb_price'] = df.apply(get_lb_price, axis=1, args=(df,))
Here's a vectorized version (i.e. no for loops) using numpy array indexing.
lookup_idxs = df.index.values - df['lookback'].values - 1
valid_lookups = lookup_idxs >= 0
df['lb_price'] = np.nan
df.loc[valid_lookups, 'lb_price'] = df.price.to_numpy()[lookup_idxs[valid_lookups].astype(int)]
print(df)
Output:
price lookback lb_price
idx
0 5 NaN NaN
1 7 1.0 NaN
2 4 2.0 NaN
3 3 1.0 7.0
4 7 3.0 5.0
5 6 1.0 3.0
This solution loops over the values of the column lookback and calculates the index of the wanted value in the column price, which I store in an array.
The rule is that the lookback value has to be a number and that the wanted index is not smaller than 0.
new = np.zeros(df.shape[0])
price = df.price.values

for i, lookback in enumerate(df.lookback.values):
    # lookback has to be a number and the index is not allowed to be less than 0
    # 0 < i - lookback is equivalent to 0 <= i - (lookback + 1)
    if not np.isnan(lookback) and 0 < i - lookback:
        new[i] = price[int(i - (lookback + 1))]
    else:
        new[i] = np.nan

df['lb_price'] = new

How to create a dataframe that only selects rows that have a value more than avg +/- standard deviation in Pandas?

This is the code that I have so far, but I am just not sure if I should join the dataframes after figuring out the standard deviation and average. Where I am currently stuck is selecting the rows whose value is more or less than avg +/- 1 std. I do not know how I would iterate through each column and do this. I thought about a for loop but wasn't exactly sure how to go about it.
import pandas_datareader.data as web
import datetime as date
fromDate ="2014-01-02"
toDate = "2016-01-02"
dfSixMo = web.DataReader('DGS6MO','fred',fromDate,toDate)
dfOneYear = web.DataReader('DGS1','fred',fromDate,toDate)
dfFiveYear = web.DataReader('DGS5','fred',fromDate,toDate)
dfTenYear = web.DataReader('DGS10','fred',fromDate,toDate)
dfJoin1 = dfSixMo.join(dfOneYear,how = 'inner')
dfJoin2 = dfFiveYear.join(dfTenYear,how='inner')
dfFinal = dfJoin1.join(dfJoin2,how='inner')
print(dfFinal)
mean = dfFinal.mean()
print('\nMean:')
print(mean)
StDev = dfFinal.std()
print('\n Standard Deviation:')
print(StDev)
IIUC this is what you want:
#setup
df = pd.DataFrame(np.random.randint(0,10,(3,3)), columns = list('abc'))
# a b c
#0 3 2 8
#1 0 6 7
#2 8 3 9
mean = df.mean()
std = df.std()
df[((mean-std < df) & (df< mean+std)).all(1)]
# a b c
#0 3 2 8
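If the goal is instead the rows that fall outside the mean ± std band (as the question title suggests), the same condition can be negated and applied to dfFinal from the question; a sketch:
mean = dfFinal.mean()
std = dfFinal.std()
# rows where at least one series is outside its column's mean ± 1 std band
outliers = dfFinal[~((mean - std < dfFinal) & (dfFinal < mean + std)).all(axis=1)]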

Cumulative sum of a pandas column until a maximum value is met, and average adjacent rows

I'm a biology student who is fairly new to Python and was hoping someone might be able to help with a problem I have yet to solve.
With some subsequent code I have created a pandas dataframe that looks like the example below:
Distance. No. of values Mean rSquared
1 500 0.6
2 80 0.3
3 40 0.4
4 30 0.2
5 50 0.2
6 30 0.1
I can provide my previous code to create this dataframe, but I didn't think it was particularly relevant.
I need to sum the "No. of values" column until I reach a value >= 100, and then combine the data of those rows in the adjacent columns, taking the weighted average of the distance and mean rSquared values, as seen in the example below:
Mean Distance. No. Of values Mean rSquared
1 500 0.6
(80*2+40*3)/120 (80+40) = 120 (80*0.3+40*0.4)/120
(30*4+50*5+30*6)/110 (30+50+30) = 110 (30*0.2+50*0.2+30*0.1)/110
etc...
I know pandas has its .cumsum function, which I might be able to implement in a for loop with an if statement that checks the upper limit and resets the sum back to 0 when it is reached. However, I haven't a clue how to average the adjacent columns.
Any help would be appreciated!
You can use this code snippet to solve your problem.
# First, compute some weighted values
df.loc[:, "weighted_distance"] = df["Distance"] * df["No. of values"]
df.loc[:, "weighted_mean_rSquared"] = df["Mean rSquared"] * df["No. of values"]

min_threshold = 100
indexes = []
temp_sum = 0

# placeholder for the final result
final_df = pd.DataFrame()
columns = ["Distance", "No. of values", "Mean rSquared"]

# resetting the index to make 'df' usable in the following loop
df = df.reset_index(drop=True)

# main loop to check and compute the desired output
for index, _ in df.iterrows():
    temp_sum += df.iloc[index]["No. of values"]
    indexes.append(index)
    # if the sum reaches 'min_threshold' then do the computation
    if temp_sum >= min_threshold:
        temp_distance = df.iloc[indexes]["weighted_distance"].sum() / temp_sum
        temp_mean_rSquared = df.iloc[indexes]["weighted_mean_rSquared"].sum() / temp_sum
        # create a temporary dataframe and concatenate it with 'final_df'
        temp_df = pd.DataFrame([[temp_distance, temp_sum, temp_mean_rSquared]], columns=columns)
        final_df = pd.concat([final_df, temp_df])
        # reset the variables
        temp_sum = 0
        indexes = []
NumPy has a function numpy.frompyfunc. You can use it to get a cumulative value that resets based on a threshold.
Here's how to implement it. With that, you can then figure out the indexes where the value goes over the threshold, and use those to calculate the Mean Distance and Mean rSquared for the values in your original dataframe.
I also leveraged @sujanay's idea of calculating the weighted values first.
c = ['Distance','No. of values','Mean rSquared']
d = [[1,500,0.6], [2,80,0.3], [3,40,0.4],
[4,30,0.2], [5,50,0.2], [6,30,0.1]]
import pandas as pd
import numpy as np
df = pd.DataFrame(d,columns=c)
#calculate the weighted distance and weighted mean squares first
df.loc[:, "w_distance"] = df["Distance"] * df["No. of values"]
df.loc[:, "w_mean_rSqrd"] = df["Mean rSquared"] * df["No. of values"]
#use numpy.frompyfunc to setup the threshold condition
sumvals = np.frompyfunc(lambda a,b: a+b if a <= 100 else b,2,1)
#assign value to cumvals based on threshold
df['cumvals'] = sumvals.accumulate(df['No. of values'], dtype=object)
#find out all records that have >= 100 as cumulative values
idx = df.index[df['cumvals'] >= 100].tolist()
#if last row not in idx, then add it to the list
if (len(df)-1) not in idx: idx += [len(df)-1]
#iterate thru the idx for each set and calculate Mean Distance and Mean rSquared
i = 0
for j in idx:
    df.loc[j, 'Mean Distance'] = (df.iloc[i:j+1]["w_distance"].sum() / df.loc[j, 'cumvals']).round(2)
    df.loc[j, 'New Mean rSquared'] = (df.iloc[i:j+1]["w_mean_rSqrd"].sum() / df.loc[j, 'cumvals']).round(2)
    i = j + 1
print (df)
The output of this will be:
Distance No. of values ... Mean Distance New Mean rSquared
0 1 500 ... 1.00 0.60
1 2 80 ... NaN NaN
2 3 40 ... 2.33 0.33
3 4 30 ... NaN NaN
4 5 50 ... NaN NaN
5 6 30 ... 5.00 0.17
If you want to extract only the records that are non NaN, you can do:
final_df = df[df['Mean Distance'].notnull()]
This will result in:
Distance No. of values ... Mean Distance New Mean rSquared
0 1 500 ... 1.00 0.60
2 3 40 ... 2.33 0.33
5 6 30 ... 5.00 0.17
I looked up BEN_YO's implementation of numpy.frompyfunc. The original SO post is "Restart cumsum and get index if cumsum more than value".
If you figure out the grouping first, pandas groupby-functionality will do a lot of the remaining work for you. A loop is appropriate to get the grouping (unless somebody has a clever one-liner):
>>> groups = []
>>> group = 0
>>> cumsum = 0
>>> for n in df["No. of values"]:
...     if cumsum >= 100:
...         cumsum = 0
...         group = group + 1
...     cumsum = cumsum + n
...     groups.append(group)
>>>
>>> groups
[0, 1, 1, 2, 2, 2]
Before doing the grouped operations you need to use the No. of values information to get the weighting in:
df[["Distance.", "Mean rSquared"]] = df[["Distance.", "Mean rSquared"]].multiply(df["No. of values"], axis=0)
Now get the sums like this:
>>> sums = df.groupby(groups)["No. of values"].sum()
>>> sums
0 500
1 120
2 110
Name: No. of values, dtype: int64
And finally the weighted group averages like this:
>>> df[["Distance.", "Mean rSquared"]].groupby(groups).sum().div(sums, axis=0)
Distance. Mean rSquared
0 1.000000 0.600000
1 2.333333 0.333333
2 5.000000 0.172727
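To put the pieces together into one result table, the group sizes can be attached to the weighted averages (a follow-up sketch using the objects defined above):
result = df[["Distance.", "Mean rSquared"]].groupby(groups).sum().div(sums, axis=0)
result["No. of values"] = sums
print(result)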

How to calculate pandas DF values based on grouped column

I'm relatively new to Pandas dataframes and I have to do a simple calculation, but so far I haven't found a good way to go about it.
Basically what I have is:
type group amount
1 A real 55
2 A fake 12
3 B real 610
4 B fake 23
5 B real 45
Now, I have to add a new column that shows the percentage of fakes in the type total. So the formula for this table would be 12 / (55 + 12) * 100 for A and 23 / (610 + 23 + 45) * 100 for B, and the table should look something like this:
type group amount percentage
1 A real 55
2 A fake 12 17.91
3 B real 610
4 B fake 23
5 B real 45 3.39
I know about groupby statements and basically all the components I need for this (I guess...), but I can't figure out how to combine them to get this result.
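For reference, a reconstruction of the example frame (a sketch):
import pandas as pd

# hypothetical reconstruction of the example frame
df = pd.DataFrame({"type": ["A", "A", "B", "B", "B"],
                   "group": ["real", "fake", "real", "fake", "real"],
                   "amount": [55, 12, 610, 23, 45]})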
# percentage of the type total, shown only on the 'fake' rows
df['percentage'] = (df.amount
                    .div(df.groupby('type').amount.transform('sum'))
                    .mul(100).round(2)
                    .loc[df.group.eq('fake')])
df['percentage'] = df['percentage'].fillna('')
df
If handling multiple 'fake' rows per type, we can be a bit more careful. I'll set the index to preserve the type and group columns while I transform.
c = ['type', 'group']
d1 = df.set_index(c, append=True)
d1.amount /= d1.groupby(level=['type']).amount.transform('sum')
d1.reset_index(c)
From here, you can choose to leave that alone or consolidate the group column.
d1.groupby(level=c).sum().reset_index()
Try this out:
percentage = {}
for type in df.type.unique():
    numerator = df[(df.type == type) & (df.group == 'fake')].amount.sum()
    denominator = df[(df.type == type)].amount.sum()
    percentage[type] = numerator / denominator * 100
df['percentage'] = list(df.type.map(percentage))
If you want to make sure you account for multiple fake groups per type, you can do the following:
type_group_total = df.groupby(['type', 'group'])['amount'].transform('sum')
type_total = df.groupby('type')['amount'].transform('sum')
df['percentage'] = type_group_total / type_total
Output
type group amount percentage
0 A real 55 0.820896
1 A fake 12 0.179104
2 B real 610 0.899705
3 B fake 23 0.100295
4 B fake 45 0.100295
