Increase the values of each row until a condition over the whole row is fulfilled - python

I have a pandas dataframe with three columns and want to multiply/increase the float numbers of each row by the same amount until the sum of all three cells (one row) fulfils the criteria (a value equal to or greater than 0.9).
import pandas as pd

df = pd.DataFrame({'A': [0.03, 0.0, 0.4],
                   'B': [0.1234, 0.4, 0.333],
                   'C': [0.5, 0.4, 0.0333]})
Outcome:
The cells in each row were multiplied so that the sum of all three cells of each row is 0.9 (the sums below are not exactly 0.9 because I only approximated them with simple multiplication; the actual outcome should reach 0.9). It is important that cells which are 0 stay 0.
print (df)
A B C
0 0.0414 0.170292 0.690000
1 0.0000 0.452000 0.452000
2 0.4720 0.392940 0.039294

You can take the sum on axis=1, subtract it from 0.9, then divide by df.shape[1] and add the result back:
df.add((0.9 - df.sum(axis=1)) / df.shape[1], axis=0)
A B C
0 0.112200 0.205600 0.582200
1 0.033333 0.433333 0.433333
2 0.444567 0.377567 0.077867

You want to apply a scaling function along the rows:
def scale(xs, target=0.9):
    """Scale the features such that their sum equals the target."""
    xs_sum = xs.sum()
    if xs_sum < target:
        return xs * (target / xs_sum)
    else:
        return xs

df.apply(scale, axis=1)
For example:
df = pd.DataFrame({'A': [0.03, 0.0, 0.4],
                   'B': [0.1234, 0.4, 0.333],
                   'C': [0.5, 0.4, 0.0333]})
df.apply(scale, axis=1)
Should give:
A B C
0 0.041322 0.169972 0.688705
1 0.000000 0.450000 0.450000
2 0.469790 0.391100 0.039110
The rows of that dataframe all sum to 0.9:
df.apply(scale, axis=1).sum(axis=1)
0 0.9
1 0.9
2 0.9
dtype: float64
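If you prefer to avoid apply, here is a vectorized sketch of the same multiplicative idea (an added example, not part of the original answers): compute each row's sum and scale only the rows that fall short of the target, which automatically keeps zero cells at zero.
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [0.03, 0.0, 0.4],
                   'B': [0.1234, 0.4, 0.333],
                   'C': [0.5, 0.4, 0.0333]})

row_sums = df.sum(axis=1)
# Scale only rows whose sum is below the target; multiplication preserves zeros.
factors = np.where(row_sums < 0.9, 0.9 / row_sums, 1.0)
print(df.mul(factors, axis=0))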

Related

Select rows in a grouped dataframe before the row which does not satisfy a condition (python)

I have a dataframe with some features. I want to group by the 'id' feature. Then, for each group, I want to identify the row whose 'speed' value is greater than a threshold and select all the rows before that one.
For example, my threshold for the 'speed' feature is 1.5, and my input is:
id  speed  ...
1   1.2    ...
1   1.9    ...
1   1.0    ...
5   0.9    ...
5   1.3    ...
5   3.5    ...
5   0.4    ...
And my desired output is:
id  speed  ...
1   1.2    ...
5   0.9    ...
5   1.3    ...
This should get you the desired results:
# Create sample data
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 5, 5, 5, 5],
                   'speed': [1.2, 1.9, 1.0, 0.9, 1.3, 9.5, 0.4]})
df
output:
id speed
0 1 1.2
1 1 1.9
2 1 1.0
3 5 0.9
4 5 1.3
5 5 9.5
6 5 0.4
ther = 1.5
s = df.speed.shift(-1).ge(ther)
df[s]
Output:
id speed
0 1 1.2
4 5 1.3
It took me an hour to figure out, but I got what you need. You need to REVERSE the dataframe and use .cumsum() (cumulative sum) within the grouped ids to find the rows that come after the speed threshold you set (in the reversed order). Then drop the speeds above the threshold, along with the rows that do not satisfy the condition. Finally, reverse the dataframe back:
# Create sample data
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 5, 5, 5, 5],
                   'speed': [1.2, 1.9, 1.0, 0.9, 1.3, 9.5, 0.4]})
# Reverse the dataframe
df = df.iloc[::-1]
thre = 1.5
# Find rows with speed more than threshold
df = df.assign(ge=df.speed.ge(thre))
# Groupby and cumsum to flag, within each id, the rows that come after the threshold (in the reversed order)
df.insert(0, 'beforethre', df.groupby('id')['ge'].cumsum())
# Drop speed more than threshold
df['ge'] = df['ge'].replace(True, np.nan)
# Drop rows that don't have any speed more than threshold or after threshold
df['beforethre'] = df['beforethre'].replace(0, np.nan)
df = df.dropna(axis=0).drop(['ge', 'beforethre'], axis=1)
# Reverse back the dataframe
df = df.iloc[::-1]
# Voila!
df
Output:
id speed
0 1 1.2
3 5 0.9
4 5 1.3
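A more direct sketch, not taken from the answers above: within each id, a cumulative product of the "still below the threshold" flag keeps exactly the rows before the first crossing (using the same sample data).
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 5, 5, 5, 5],
                   'speed': [1.2, 1.9, 1.0, 0.9, 1.3, 9.5, 0.4]})
thre = 1.5
# Within each id, keep only the rows that come before the first speed >= threshold
keep = df.groupby('id')['speed'].transform(
    lambda s: s.lt(thre).astype(int).cumprod().astype(bool))
print(df[keep])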

Cumulative product by group without groups' last row in pandas

I have a simple dataframe as the following:
import pandas as pd

n_obs = 3
dd = pd.DataFrame({
    'WTL_exploded': [0, 1, 2] * n_obs,
    'hazard': [0.3, 0.4, 0.5, 0.2, 0.8, 0.9, 0.6, 0.6, 0.65],
}, index=[1, 1, 1, 2, 2, 2, 3, 3, 3])
dd
I want to group by the index and get the cumulative product of the hazard column. However, I want to multiply all but the last element of each group.
Desired output:
index  hazard
1      0.3
1      0.12
2      0.2
2      0.16
3      0.6
3      0.36
How can I do that?
You can use:
out = dd.groupby(level=0, group_keys=False).apply(lambda x: x.cumprod().iloc[:-1])
Or:
out = dd.groupby(level=0).apply(lambda x: x.cumprod().iloc[:-1]).droplevel(1)
output:
WTL_exploded hazard
1 0 0.30
1 0 0.12
2 0 0.20
2 0 0.16
3 0 0.60
3 0 0.36
NB. you can also use lambda x: x.cumprod().head(-1).
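For instance, the head(-1) variant spelled out (equivalent to the first option above):
out = dd.groupby(level=0, group_keys=False).apply(lambda x: x.cumprod().head(-1))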
The solution I found is a bit intricate but works for the test case.
First, get rid of the last row of each group:
ff = dd.groupby(lambda x:x, as_index=False).apply(lambda x: x.iloc[:-1])
ff
Then, restore the original index, group-by again and use pandas cumprod:
ff.reset_index().set_index('level_1').groupby(lambda x:x).cumprod()
Is there a more direct way?

Applying numpy random.choice to randomise categories with probabilities from pandas df column

Just trying to generate a new column in a dataframe, which takes the value 1 or 0 based on a probability located in other columns in the same row.
With dummy data: df = pd.DataFrame({'a': [.1, .2, .3, .4], 'b': [.9, .8, .7, .6]})
I'm hoping to add a third column c, which in the first row for instance would have a .1 probability of being 1, and a .9 of being 0. And so on.
First attempt was defining a function and using apply:
def randomiser(x):
    return np.random.choice([1, 0], size=(1, 1), p=[df.loc[[x]]['a'], -df.loc[[x]]['b']])

df['probability'] = df.apply(lambda x: randomiser(x), axis=1)
However, this throws up an error about too many values being supplied to p, so I don't think it's iterating properly.
Second I tried a for loop:
for row in df.iterrows():
    row['probability'] = np.random.choice([1, 0], size=(1, 1), p=[df.loc[[row]]['a'], -df.loc[[row]]['b']])
But this leads to a TypeError complaining that series objects are mutable.
Finally I tried pulling the relevant columns out into tuples or lists, but with similar results.
Any thoughts?
Thanks
Because of how apply() works, you don't need to use df.loc[]; you just use x and the name of the column you wish to get the values from. Try the following:
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [.1, .2, .3, .4], 'b': [.9, .8, .7, .6]})

def randomiser(x):
    return np.random.choice([1, 0], size=(1, 1), p=[x['a'], x['b']])[0][0]

df['probability'] = df.apply(lambda x: randomiser(x), axis=1)
This outputs:
a b probability
0 0.1 0.9 1
1 0.2 0.8 0
2 0.3 0.7 0
3 0.4 0.6 1
df["probability"] = [np.random.choice([1, 0], p=probs).item()
for probs in df[["a", "b"]].values]
With this list comprehension, we pass each row of df to np.random.choice as probabilities and choose from [1, 0] respectively (.item is there to grab the scalar from 1-entry array).
to get (for example)
a b probability
0 0.1 0.9 0
1 0.2 0.8 0
2 0.3 0.7 0
3 0.4 0.6 1
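A fully vectorized sketch, not from the answers above: since column a holds the probability of drawing a 1, comparing it against one uniform draw per row gives the same distribution without apply or a loop.
import numpy as np

# One uniform number per row; it is below 'a' with probability 'a'
df['probability'] = (np.random.random(len(df)) < df['a']).astype(int)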

Finding counts of relative and absolute fluctuations in dataframe where each row contains a timeseries

I have a dataframe containing a table of financial timeseries, with each row having the columns:
ID of that timeseries
a Target value (against which we want to measure deviations, both relative and absolute)
and a timeseries of values for various dates: 1/01, 1/02, 1/03, ...
We want to calculate the fluctuation counts, both relative and absolute, for every row/ID's timeseries. Then we want to find which row/ID has the most fluctuations/'spikes', as follows:
First, we find the difference between two consecutive timeseries values and choose a threshold. The threshold represents how much difference is allowed between two values before we declare a 'fluctuation' or 'spike': if the difference between any two columns' values is higher than the threshold you set, it counts as a spike.
However, we need to ensure that the threshold is generic and works with both percentage and absolute differences between any two values in any row.
So basically, we choose a threshold in percentage form (an educated guess), since one row has its values represented in '%' form, and a percentage threshold will also work properly with the absolute values.
The output should be a new column fluctuation counts (FCount), both relative and absolute, for every row/ID.
Code:
import pandas as pd
# Create sample dataframe
raw_data = {'ID': ['A1', 'B1', 'C1', 'D1'],
            'Domain': ['Finance', 'IT', 'IT', 'Finance'],
            'Target': [1, 2, 3, '0.9%'],
            'Criteria': ['<=', '<=', '>=', '>='],
            "1/01": [0.9, 1.1, 2.1, 1],
            "1/02": [0.4, 0.3, 0.5, 0.9],
            "1/03": [1, 1, 4, 1.1],
            "1/04": [0.7, 0.7, 0.1, 0.7],
            "1/05": [0.7, 0.7, 0.1, 1],
            "1/06": [0.9, 1.1, 2.1, 0.6]}
df = pd.DataFrame(raw_data, columns=['ID', 'Domain', 'Target', 'Criteria', '1/01',
                                     '1/02', '1/03', '1/04', '1/05', '1/06'])
ID Domain Target Criteria 1/01 1/02 1/03 1/04 1/05 1/06
0 A1 Finance 1 <= 0.9 0.4 1.0 0.7 0.7 0.9
1 B1 IT 2 <= 1.1 0.3 1.0 0.7 0.7 1.1
2 C1 IT 3 >= 2.1 0.5 4.0 0.1 0.1 2.1
3 D1 Finance 0.9% >= 1.0 0.9 1.1 0.7 1.0 0.6
And here's the expected output with a fluctuation count (FCount) column. Then we can get whichever ID has the largest FCount.
ID Domain Target Criteria 1/01 1/02 1/03 1/04 1/05 1/06 FCount
0 A1 Finance 1 <= 0.9 0.4 1.0 0.7 0.7 0.9 -
1 B1 IT 2 <= 1.1 0.3 1.0 0.7 0.7 1.1 -
2 C1 IT 3 >= 2.1 0.5 4.0 0.1 0.1 2.1 -
3 D1 Finance 0.9% >= 1.0 0.9 1.1 0.7 1.0 0.6 -
Given,
# importing pandas as pd
import pandas as pd
import numpy as np
# Create sample dataframe
raw_data = {'ID': ['A1', 'B1', 'C1', 'D1'],
            'Domain': ['Finance', 'IT', 'IT', 'Finance'],
            'Target': [1, 2, 3, '0.9%'],
            'Criteria': ['<=', '<=', '>=', '>='],
            "1/01": [0.9, 1.1, 2.1, 1],
            "1/02": [0.4, 0.3, 0.5, 0.9],
            "1/03": [1, 1, 4, 1.1],
            "1/04": [0.7, 0.7, 0.1, 0.7],
            "1/05": [0.7, 0.7, 0.1, 1],
            "1/06": [0.9, 1.1, 2.1, 0.6]}
df = pd.DataFrame(raw_data, columns=['ID', 'Domain', 'Target', 'Criteria', '1/01',
                                     '1/02', '1/03', '1/04', '1/05', '1/06'])
It is easier to tackle this problem by breaking it into two parts (absolute thresholds and relative thresholds) and going through it step by step on the underlying numpy arrays.
EDIT: Long explanation ahead, skip to the end for just the final function
First, create a list of date columns to access only the relevant columns in every row.
date_columns = ['1/01', '1/02','1/03', '1/04','1/05', '1/06']
df[date_columns].values
#Output:
array([[0.9, 0.4, 1. , 0.7, 0.7, 0.9],
[1.1, 0.3, 1. , 0.7, 0.7, 1.1],
[2.1, 0.5, 4. , 0.1, 0.1, 2.1],
[1. , 0.9, 1.1, 0.7, 1. , 0.6]])
Then we can use np.diff to easily get the differences between the dates on the underlying array. We also take the absolute value, because that is what we are interested in.
np.abs(np.diff(df[date_columns].values))
#Output:
array([[0.5, 0.6, 0.3, 0. , 0.2],
[0.8, 0.7, 0.3, 0. , 0.4],
[1.6, 3.5, 3.9, 0. , 2. ],
[0.1, 0.2, 0.4, 0.3, 0.4]])
Now, just worrying about the absolute thresholds, it is as simple as just checking if the values in the differences are greater than a limit.
abs_threshold = 0.5
np.abs(np.diff(df[date_columns].values)) > abs_threshold
#Output:
array([[False, True, False, False, False],
[ True, True, False, False, False],
[ True, True, True, False, True],
[False, False, False, False, False]])
We can see that the sum over this array for every row will give us the result we need (summing a boolean array uses the underlying True=1 and False=0, so you are effectively counting how many True values are present). For percentage thresholds, we just need one additional step: dividing all differences by the original values before the comparison. Putting it all together.
To elaborate:
We can see how the sum along each row can give us the counts of values crossing absolute threshold as follows.
abs_fluctuations = np.abs(np.diff(df[date_columns].values)) > abs_threshold
print(abs_fluctuations.sum(-1))
#Output:
[1 2 4 0]
To start with relative thresholds, we can create the differences array same as before.
dates = df[date_columns].values #same as before, but just assigned
differences = np.abs(np.diff(dates)) #same as before, just assigned
pct_threshold=0.5 #aka 50%
print(differences.shape) #(4, 5) aka 4 rows, 5 columns if you want to think traditional tabular 2D shapes only
print(dates.shape) #(4, 6) 4 rows, 6 columns
Now, note that the differences array has one column fewer, which makes sense: for 6 dates there are 5 "differences", one for each gap.
Now, just focusing on 1 row, we see that calculating percent changes is simple.
print(dates[0][:2]) #for first row[0], take the first two dates[:2]
#Output:
array([0.9, 0.4])
print(differences[0][0]) #for first row[0], take the first difference[0]
#Output:
0.5
A change from 0.9 to 0.4 is a change of 0.5 in absolute terms, but in relative terms it is a change of 0.5/0.9 (difference/original), i.e. about 0.5556, or 55.56% if multiplied by 100 (the multiplication by 100 is omitted here to keep things simple).
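A quick check of that worked example against the differences and dates arrays defined above:
print(differences[0][0] / dates[0][0])  # 0.5 / 0.9 ≈ 0.5556, the 55.56% change described above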
The main thing to realise at this step is that we need to do this division against the "original" values for all differences to get percent changes.
However, dates array has one "column" too many. So, we do a simple slice.
dates[:,:-1] #For all rows(:,), take all columns except the last one(:-1).
#Output:
array([[0.9, 0.4, 1. , 0.7, 0.7],
[1.1, 0.3, 1. , 0.7, 0.7],
[2.1, 0.5, 4. , 0.1, 0.1],
[1. , 0.9, 1.1, 0.7, 1. ]])
Now, I can just calculate relative or percentage changes by element-wise division:
relative_differences = differences / dates[:,:-1]
And then it is the same as before: pick a threshold and see whether it is crossed.
rel_fluctuations = relative_differences > pct_threshold
#Output:
array([[ True, True, False, False, False],
[ True, True, False, False, True],
[ True, True, True, False, True],
[False, False, False, False, False]])
Now, if we want to know whether either the absolute or the relative threshold is crossed, we just need to take a bitwise OR | (it's even there in the sentence!) and then take the sum along the rows.
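In code, combining the two boolean arrays computed above (a small added sketch):
fcounts = (abs_fluctuations | rel_fluctuations).sum(-1)
print(fcounts)
#Output:
#[2 3 4 0]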
Putting all this together, we can just create a function that is ready to use. Note that functions are nothing special, just a way of grouping lines of code together for ease of use. Using a function is as simple as calling it; you have been using functions/methods all along without realising it.
date_columns = ['1/01', '1/02', '1/03', '1/04', '1/05', '1/06']  # if hardcoded
date_columns = df.columns[4:]  # if you wish to assign dynamically; the date columns start at the 5th column
def get_FCount(df, date_columns, abs_threshold=0.5, pct_threshold=0.5):
    '''Expects a list of date columns with at least two values.
    Returns a 1D array with FCounts for every row.
    pct_threshold: percentage, where 1 means 100%
    '''
    dates = df[date_columns].values
    differences = np.abs(np.diff(dates))
    abs_fluctuations = differences > abs_threshold
    rel_fluctuations = differences / dates[:, :-1] > pct_threshold
    return (abs_fluctuations | rel_fluctuations).sum(-1)  # bitwise OR, since we count values that cross either one of the thresholds
df['FCount'] = get_FCount(df, date_columns) #call our function, and assign the result array to a new column
print(df['FCount'])
#Output:
0 2
1 3
2 4
3 0
Name: FCount, dtype: int32
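And, to answer the final part of the question, the ID with the largest FCount can then be read off directly (an added one-liner, not part of the original answer):
print(df.loc[df['FCount'].idxmax(), 'ID'])
#Output:
#C1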
Assuming you want pct_change() across all date columns in a row with a threshold, you can also try pct_change() on axis=1:
thresh_ = 0.5
s = pd.to_datetime(df.columns, format='%d/%m', errors='coerce').notna()  # all date cols
df = df.assign(Count=df.loc[:, s].pct_change(axis=1).abs().gt(thresh_).sum(axis=1))
Or:
df.assign(Count=df.iloc[:,4:].pct_change(axis=1).abs().gt(0.5).sum(axis=1))
ID Domain Target Criteria 1/01 1/02 1/03 1/04 1/05 1/06 Count
0 A1 Finance 1.0 <= 0.9 0.4 1.0 0.7 0.7 0.9 2
1 B1 IT 2.0 <= 1.1 0.3 1.0 0.7 0.7 1.1 3
2 C1 IT 3.0 >= 2.1 0.5 4.0 0.1 0.1 2.1 4
3 D1 Finance 0.9 >= 1.0 0.9 1.1 0.7 1.0 0.6 0
Try a loc and an iloc and a sub and an abs and a sum and an idxmin:
print(df.loc[df.iloc[:, 4:].sub(df['Target'].tolist(), axis='rows').abs().sum(1).idxmin(), 'ID'])
Output:
D1
Explanation:
I first take the date columns (starting at column index 4), then simply subtract from each row the corresponding Target value.
Then get the absolute value of it, so -1.1 will be 1.1 and 1.1 will be still 1.1, then sum each row together and get the row with the lowest number.
Then use a loc to get that index in the actual dataframe, and get the ID column of it which gives you D1.
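The same chain broken into named steps may be easier to follow; a sketch assuming the Target column is numeric (e.g. 0.9 rather than the string '0.9%', as in the output table above):
date_vals = df.iloc[:, 4:]                                           # the date columns
deviation = date_vals.sub(df['Target'].tolist(), axis='rows').abs()  # |value - Target| per cell
total_dev = deviation.sum(axis=1)                                    # total absolute deviation per row
print(df.loc[total_dev.idxmin(), 'ID'])                              # row closest to its Target -> 'D1'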
The following is cleaner pandas idiom and improves on @ParitoshSingh's version. It is much cleaner to keep two separate dataframes:
a ts (metadata) dataframe for the timeseries columns 'ID', 'Domain', 'Target', 'Criteria'
a values dataframe for the timeseries values (or 'dates', as the OP keeps calling them)
Use ID as the common index for both dataframes; now you get seamless merge/join, including on any results such as the output of compute_FCount_df().
There is also no need to pass around ugly lists of column names or indices (into compute_FCount_df()). This is much better deduplication, as mentioned in the comments. The code for this setup is at the bottom.
Doing this reduces compute_FCount_df to a four-liner (I improved @ParitoshSingh's version to use the pandas builtins df.diff(axis=1) and .abs(); also note that the resulting series is returned with the correct ID index, not 0:3, and hence can be used directly in assignment/insertion/merge/join):
def compute_FCount_df(dat, abs_threshold=0.5, pct_threshold=0.5):
    """Compute FluctuationCount for all timeseries/rows"""
    differences = dat.diff(axis=1).iloc[:, 1:].abs()
    abs_fluctuations = differences > abs_threshold
    # .values makes the division align positionally rather than by (shifted) column label
    rel_fluctuations = differences / dat.iloc[:, :-1].values > pct_threshold
    return (abs_fluctuations | rel_fluctuations).sum(1)
where the boilerplate to set up two separate dataframes is at bottom.
Also note it's cleaner not to put the fcounts series/column in either values (where it definitely doesn't belong) or ts (where it would be kind of kludgy); just keep it as a standalone series:
fcounts = compute_FCount_df(values)
>>> fcounts
A1 2
B1 2
C1 4
D1 1
and this allows you to directly get the index (ID) of the timeseries with most 'fluctuations':
>>> fcounts.idxmax()
'C1'
But really, since conceptually we're applying the function separately, row-wise, to each row of timeseries values, we should use values.apply(..., axis=1):
values.apply(compute_FCount_ts, axis=1)
def compute_FCount_ts(dat, abs_threshold=0.5, pct_threshold=0.5):
    """Compute FluctuationCount for a single timeseries (row)"""
    differences = dat.diff().iloc[1:].abs()
    abs_fluctuations = differences > abs_threshold
    # dat is a Series here, so slice with iloc[:-1]; .values aligns the division positionally
    rel_fluctuations = differences / dat.iloc[:-1].values > pct_threshold
    return (abs_fluctuations | rel_fluctuations).sum()
(Note: the row-wise version needs Series-style indexing as above, i.e. dat.iloc[:-1] rather than dat.iloc[:, :-1]; otherwise pandas raises a "Too many indexers" error.)
Last, here's the boilerplate code to set up two separate dataframes, with shared index ID:
import pandas as pd
import numpy as np
ts = pd.DataFrame(index=['A1', 'B1', 'C1', 'D1'], data={
    'Domain': ['Finance', 'IT', 'IT', 'Finance'],
    'Target': [1, 2, 3, '0.9%'],
    'Criteria': ['<=', '<=', '>=', '>=']})
values = pd.DataFrame(index=['A1', 'B1', 'C1', 'D1'], data={
    "1/01": [0.9, 1.1, 2.1, 1],
    "1/02": [0.4, 0.3, 0.5, 0.9],
    "1/03": [1, 1, 4, 1.1],
    "1/04": [0.7, 0.7, 0.1, 0.7],
    "1/05": [0.7, 0.7, 0.1, 1],
    "1/06": [0.9, 1.1, 2.1, 0.6]})

Write a user-defined fillna function for a pandas dataframe to fill np.nan with different values based on conditions

Considering the following pandas dataframe:
import numpy as np
import pandas as pd

change = [0.475, 0.625, 0.1, 0.2, -0.1, -0.75, 0.1, -0.1, 0.2, -0.2]
position = [1.0, 1.0, np.nan, np.nan, np.nan, -1.0, np.nan, np.nan, np.nan, np.nan]
date = ['20150101', '20150102', '20150103', '20150104', '20150105', '20150106', '20150107', '20150108', '20150109', '20150110']
x = pd.DataFrame({'date': date, 'position': position, 'change': change})
x
Outputs
date change position
20150101 0.475 1
20150102 0.625 1
20150103 0.1 np.nan
20150104 0.2 np.nan
20150105 -0.1 np.nan
20150106 -0.75 -1
20150107 0.1 np.nan
20150108 -0.1 np.nan
20150109 0.2 np.nan
20150110 -0.2 np.nan
I want to fillna with the following rules:
For rows whose "position" value is np.nan, if value of "change" has the same sign of last non-null value of position (change * position>0, such as 0.1*1 and 0.2*1 >0), we fillna with last non-null value.
For rows whose "position" value is np.nan, if value of "change" has the same sign of last non-null value value of position (change * position <=0 such as -1*0.1), we fillna with 0.
Once one np.nan is filled with 0, the following np.nan will be filled with 0 as well.
The following are the expected results from the sample data frame:
date change position
20150101 0.475 1
20150102 0.625 1
20150103 0.1 1
20150104 0.2 1
20150105 -0.1 0
20150106 -0.75 -1
20150107 0.1 0
20150108 -0.1 0
20150109 0.2 0
20150110 -0.2 0
EDIT:
The method I developed is the following:
while any(np.isnan(x['position'])):
    conditions = [(np.isnan(x['position'])) & (x['position'].shift(1) * x['change'] > 0),
                  (np.isnan(x['position'])) & (x['position'].shift(1) * x['change'] <= 0)]
    choices = [x['position'].shift(1), 0]
    x['position'] = np.select(conditions, choices, default=x['position'])
but as you can see, it is not very satisfying, and it is very slow if you have 80,000,000 rows of data.
Any suggestions? Thanks for the help!
I think your code is pretty solid; the main issue is that you are iterating through it more times than you need to. shift() only goes back one line at a time, but if you change that to fillna(method='ffill') then you essentially get an unlimited number of shifts and only have to do this once instead of over multiple iterations (how many iterations depends on your data):
conditions = [
    (np.isnan(x['position'])) & (x['position'].fillna(method='ffill') * x['change'] > 0),
    (np.isnan(x['position'])) & (x['position'].fillna(method='ffill') * x['change'] <= 0)]
But I believe you can go one step further and eliminate the while by adding another fillna at the end:
conditions = [
    (np.isnan(x['position'])) & (x['position'].fillna(method='ffill') * x['change'] > 0),
    (np.isnan(x['position'])) & (x['position'].fillna(method='ffill') * x['change'] <= 0)]
choices = [x['position'].shift(1), 0]
x['position'] = np.select(conditions, choices, default=x['position'])
x['position'] = x['position'].fillna(method='ffill')
On your sample data, the first change is about 2x faster than your code, and the second is about 4x. I get the same answers as you, but of course you'll want to verify this on the real data to be sure.
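For reference, here is the second variant as a self-contained sketch against the sample data from the question (assuming the frame is assigned to x, as in the question's own loop); on this sample it reproduces the expected output.
import numpy as np
import pandas as pd

change = [0.475, 0.625, 0.1, 0.2, -0.1, -0.75, 0.1, -0.1, 0.2, -0.2]
position = [1.0, 1.0, np.nan, np.nan, np.nan, -1.0, np.nan, np.nan, np.nan, np.nan]
date = ['20150101', '20150102', '20150103', '20150104', '20150105',
        '20150106', '20150107', '20150108', '20150109', '20150110']
x = pd.DataFrame({'date': date, 'position': position, 'change': change})

ffilled = x['position'].fillna(method='ffill')
conditions = [np.isnan(x['position']) & (ffilled * x['change'] > 0),
              np.isnan(x['position']) & (ffilled * x['change'] <= 0)]
choices = [x['position'].shift(1), 0]
x['position'] = np.select(conditions, choices, default=x['position'])
x['position'] = x['position'].fillna(method='ffill')
print(x)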
