Capped / Constrained Weights - python

I have a dataframe of weights, in which I want to constrain the maximum weight for any one element to 30%. However, in doing this the sum of the weights becomes less than 1, so the weights of all other elements should be uniformly increased, and then repeatedly capped at 30% until the sum of all weights is 1.
For example, if my data is in a pandas DataFrame, how can I do this efficiently?
Note: in reality I have around 20 elements which I want to cap at 10%... so there is much more processing involved. I also intend to run this step thousands of times.

@jpp
The following is a rough approach, modified from your answer, to iteratively solve and re-cap. It doesn't produce a perfect answer though, and having a while loop makes it inefficient. Any ideas how this could be improved?
import pandas as pd
import numpy as np

cap = 0.1
df = pd.DataFrame({'Elements': list('ABCDEFGHIJKLMNO'),
                   'Values': [17, 11, 7, 5, 4, 4, 3, 2, 1.5, 1, 1, 1, 0.8, 0.6, 0.5]})
df['Uncon'] = df['Values'] / df['Values'].sum()
df['Con'] = np.minimum(cap, df['Uncon'])

while df['Con'].sum() < 1 or len(df['Con'][df['Con'] > cap]) >= 1:
    df['Con'] = np.minimum(cap, df['Con'])
    nonmax = df['Con'].ne(cap)
    adj = (1 - df['Con'].sum()) * df['Con'].loc[nonmax] / df['Uncon'].loc[nonmax].sum()
    df['Con'] = df['Con'].mask(nonmax, df['Con'] + adj)

print(df)
print(df['Con'].sum())

Here's one vectorised solution. The idea is to calculate an adjustment and distribute it proportionately among the non-capped values.
import pandas as pd
import numpy as np

df = pd.DataFrame({'Elements': list('ABCDE'),
                   'Uncon': [0.53, 0.34, 0.06, 0.03, 0.03]})
df['Con'] = np.minimum(0.30, df['Uncon'])

nonmax = df['Con'].ne(0.30)
adj = (1 - df['Con'].sum()) * df['Uncon'].loc[nonmax] / df['Uncon'].loc[nonmax].sum()
df['Con'] = df['Con'].mask(nonmax, df['Uncon'] + adj)

print(df)
  Elements  Uncon  Con
0        A   0.53  0.3
1        B   0.34  0.3
2        C   0.06  0.2
3        D   0.03  0.1
4        E   0.03  0.1
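For the follow-up case, where the redistribution itself pushes other elements over the cap, one option is to wrap the same redistribution step in a small loop that re-caps and re-distributes until the total reaches 1 (or nothing is left uncapped). The sketch below is my own rough take, not code from the answer above; the helper name cap_weights is made up, and it redistributes in proportion to the current (already partially capped) weights rather than the original ones:

import numpy as np
import pandas as pd

def cap_weights(uncon, cap=0.1, tol=1e-12, max_iter=100):
    """Cap weights at `cap`, then repeatedly hand the shortfall back to the
    uncapped elements (proportionally) until the total is 1 or everything is capped."""
    con = np.asarray(uncon, dtype=float).copy()
    for _ in range(max_iter):
        con = np.minimum(cap, con)
        free = con < cap                    # elements that can still absorb weight
        shortfall = 1 - con.sum()
        if shortfall < tol or not free.any():
            break                           # done, or infeasible (cap * n < 1)
        con[free] += shortfall * con[free] / con[free].sum()
    return con

df = pd.DataFrame({'Elements': list('ABCDEFGHIJKLMNO'),
                   'Values': [17, 11, 7, 5, 4, 4, 3, 2, 1.5, 1, 1, 1, 0.8, 0.6, 0.5]})
df['Uncon'] = df['Values'] / df['Values'].sum()
df['Con'] = cap_weights(df['Uncon'].to_numpy(), cap=0.1)
print(df)
print(df['Con'].sum())

Because every pass that does any work pins at least one additional element at the cap, the loop is bounded by the number of elements rather than by a floating-point convergence test.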


Minimizing function using scipy with constraints

I have the following constraints for the problem. I want to minimize the sum of squared differences of w_i and uw_i, divided by SUM(uw), subject to these restrictions:
1. w_i is at most ul
2. w_i is at least 0.05
3. The sum of all w for a sector code cannot be bigger than 0.50
So basically I want to generate all w_i for each row; however, I don't know how to implement the third restriction with scipy.
With scipy.optimize.lsq_linear I can enforce the first two conditions with bounds = (0.05, ul), but I don't know how to enforce the third one.
import pandas as pd
import scipy
import numpy as np
df = pd.read_csv("https://raw.githubusercontent.com/norhther/datasets/main/data(1).csv")
df = df.drop("Unnamed: 0", axis = 1)
df
I think you are trying to do something like this:
import pandas as pd
import scipy
import numpy as np

'''
Minimize the sum of squared differences of w_i, uw_i divided by SUM(uw) following these restrictions:

Constraints:
1. w_i is at maximum, ul [NOTE: I think this should say 'minimum']
2. w_i is, at least, 0.05 [NOTE: I think this should say 'at most']
3. The sum of all w for a sector code can not be bigger than 0.50
'''

df = pd.read_csv("https://raw.githubusercontent.com/norhther/datasets/main/data(1).csv")
df = df.drop("Unnamed: 0", axis = 1)
print(df)

gb = df.groupby('Sector Code')['ul']
codeCounts = gb.count().to_list()
cumCounts = [0] + [sum(codeCounts[:i + 1]) for i in range(len(codeCounts))]
newIdx = []
for code, dfGp in gb:
    newIdx += list(dfGp.index)
df = df.reindex(newIdx)

# For each unique Sector Code, create constraint that 0.50 minus the sum of w for that code must be non-negative:
def foo(i, c):
    # return a closure that acts as a constraint for the i'th interval defined by c[i-1]:c[i]
    def bar(x):
        return 0.50 - sum(x[c[i-1]:c[i]])
    return bar
cons = [{'type': 'ineq', 'fun': foo(i, cumCounts)} for i in range(1, len(cumCounts))]

# Value of bounds argument to enforce ul <= w_i <= 0.05
bnds = tuple((ul_i, 0.05) for code, ul_group in gb for ul_i in ul_group)

# Initial guess
n = len(df.index)
w_i = np.ones(n) * (1 / n)

# The objective function to be minimized
uw_sum = df.uw.sum()
def fun(w):
    return (pd.Series(w) - df.uw).pow(2).sum() / uw_sum

# Optimize using scipy minimize() function
from scipy.optimize import minimize
res = minimize(fun, w_i, method='SLSQP', bounds=bnds, constraints=cons)
print(res)

df['w'] = res.x
df = df.reindex(range(len(df.index)))
print(df)
Explanation:
- Use groupby() to get the row count for each unique Sector Code value and also to construct an index ordered by Sector Code, which we use to re-order the original input df
- Create a list of constraint dictionaries to be passed to the optimizer, one for each Sector Code, which uses Python closures to constrain the sum of the corresponding solution elements to be <= 0.50
- Create a sequence of bounds to constrain solution elements w_i to be between ul and 0.05
- Create the objective function to return the sum of squared differences of w_i and uw_i, divided by sum(uw)
- Call minimize() from scipy.optimize with the above constraints, bounds, objective function and an initial guess
- Add a column to the dataframe with the result and call reindex() to restore the original row order.
Output:
uw ul Sector Code
0 0.006822 0.050000 40
1 0.017949 0.050000 40
2 0.001906 0.031289 40
3 0.000904 0.040318 20
4 0.001147 0.046904 15
... ... ... ...
1226 0.003653 0.033553 10
1227 0.002556 0.031094 10
1228 0.002816 0.041031 10
1229 0.010216 0.050000 40
1230 0.001559 0.033480 55
[1231 rows x 3 columns]
fun: 0.4487707682194904
jac: array([0.02089997, 0.00466947, 0.01358654, ..., 0.02070332, nan,
0.02188896])
message: 'Positive directional derivative for linesearch'
nfev: 919
nit: 5
njev: 1
status: 8
success: False
x: array([0.03730054, 0.0247585 , 0.02171931, ..., 0.03300862, 0.05 ,
0.03348039])
uw ul Sector Code w
0 0.006822 0.050000 40 0.050000
1 0.017949 0.050000 40 0.050000
2 0.001906 0.031289 40 0.031289
3 0.000904 0.040318 20 0.040318
4 0.001147 0.046904 15 0.046904
... ... ... ... ...
1226 0.003653 0.033553 10 0.033553
1227 0.002556 0.031094 10 0.031094
1228 0.002816 0.041031 10 0.041031
1229 0.010216 0.050000 40 0.050000
1230 0.001559 0.033480 55 0.033480
[1231 rows x 4 columns]
Note that success is False, so perhaps some work remains. Hopefully the dataframe related manipulations are helpful in addressing your question.
You already got a working answer from @constantstranger. IMO there's just one problem: it's quite slow. More precisely, it took more than a minute to solve the problem on my machine.
Therefore, here are some notes on what could be done to speed up the solver:
Since Python has a noticeable overhead when calling functions, it's a good idea to implement all functions as fast as possible. For instance, evaluating one vectorial constraint function is faster than evaluating multiple scalar constraint functions.
At the moment, all derivatives (the objective gradient and the constraint Jacobian) are approximated by finite differences. This is a real bottleneck because each evaluation of the approximated derivative goes in hand with multiple objective/constraint function evaluations. Instead, it's highly recommended to provide the exact derivatives or use algorithmic differentiation.
Last but not least, scipy.optimize.minimize is only suited to small to mid-sized problems. If you are willing to use another package, you could use IPOPT, a state-of-the-art NLP solver. The cyipopt package provides a scipy-like interface, so switching over from scipy.optimize.minimize isn't hard.
Apart from that, your problem is a (convex) quadratic optimization problem that can be formulated as follows:
min f(w) s.t. A*w <= 0.5, u_l <= w <= 0.05
with
f(w) = (1/sum(u_w)) * ||w - u_w||^2_2 = (1/sum(u_w)) * (w'Iw - 2u_w'*w + u_w'u_w)
where A[i,j] = 1 if w[j] belongs to sector i and 0 otherwise.
Then, solving the problem with IPOPT (note that we pass the exact derivatives) looks like this:
import numpy as np
import pandas as pd
from cyipopt import minimize_ipopt

# dataframe
df = pd.read_csv("https://raw.githubusercontent.com/norhther/datasets/main/data(1).csv")
df = df.drop("Unnamed: 0", axis = 1)

# sectors
sectors = df["Sector Code"].unique()

# building the matrix A
A = np.zeros((sectors.size, len(df)))
for i, sec in enumerate(sectors):
    indices = df[df["Sector Code"] == sec].index.values
    A[i, indices] = 1

uw = df['uw'].values
uw_sum = uw.sum()

# objective
def obj(w):
    return np.sum((w - uw)**2) / uw_sum

# objective gradient
def grad(w):
    return (2*w - 2*uw) / uw_sum

# linear constraint A @ w <= 0.5, i.e. 0.5 - A @ w >= 0
cons = [{'type': 'ineq', 'fun': lambda w: 0.5 - A @ w, 'jac': lambda w: -A}]

# variable bounds
bounds = [(u_i, 0.05) for u_i in df.ul.values]

# feasible initial guess
w0 = np.ones(len(df)) / len(df)

# solve the problem
res = minimize_ipopt(obj, x0=w0, jac=grad, bounds=bounds, constraints=cons)
print(res)
On my machine, this terminates in less than 2 seconds and yields
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit https://github.com/coin-or/Ipopt
******************************************************************************
fun: 0.4306218505716169
info: {'x': array([0.05 , 0.05 , 0.03128946, ..., 0.04103131, 0.05 ,
0.03348038]), 'g': array([-3.51687688, -9.45217602, -7.88799127, -1.78825803, -1.86650095,
-5.09092925, -2.11181422, -1.35485327, -1.15847276, 0.35 ]), 'obj_val': 0.4306218505716169, 'mult_g': array([-1.000000e+03, -1.000000e+03, -1.000000e+03, -1.000000e+03,
-1.000000e+03, -1.000000e+03, -1.000000e+03, -1.000000e+03,
-1.000000e+03, -2.857166e-09]), 'mult_x_L': array([1000.02960821, 1000.02197802, 1000.00000005, ..., 1000.00000011,
1000.02728049, 1000.00000006]), 'mult_x_U': array([0.00000000e+00, 0.00000000e+00, 5.34457820e-08, ...,
1.11498931e-07, 0.00000000e+00, 6.05340266e-08]), 'status': 2, 'status_msg': b'Algorithm converged to a point of local infeasibility. Problem may be infeasible.'}
message: b'Algorithm converged to a point of local infeasibility. Problem may be infeasible.'
nfev: 13
nit: 9
njev: 7
status: 2
success: False
x: array([0.05 , 0.05 , 0.03128946, ..., 0.04103131, 0.05 ,
0.03348038])
[Finished in 1.9s]
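Not from the answer above, but for comparison: the first two notes (one vector-valued constraint with its Jacobian, plus an exact objective gradient) can also be applied while staying with scipy.optimize.minimize. A rough sketch, with no claim about how well SLSQP then converges on this particular data set:

import numpy as np
import pandas as pd
from scipy.optimize import minimize

df = pd.read_csv("https://raw.githubusercontent.com/norhther/datasets/main/data(1).csv")
df = df.drop("Unnamed: 0", axis=1)

# sector membership matrix, built as in the IPOPT version above
sectors = df["Sector Code"].unique()
A = np.zeros((sectors.size, len(df)))
for i, sec in enumerate(sectors):
    A[i, df.index[df["Sector Code"] == sec]] = 1

uw = df["uw"].to_numpy()
uw_sum = uw.sum()

obj = lambda w: np.sum((w - uw) ** 2) / uw_sum      # objective
grad = lambda w: (2 * w - 2 * uw) / uw_sum          # exact gradient
cons = [{'type': 'ineq',
         'fun': lambda w: 0.5 - A @ w,              # one vector-valued constraint...
         'jac': lambda w: -A}]                      # ...with its exact Jacobian
bounds = [(ul_i, 0.05) for ul_i in df["ul"]]

w0 = np.full(len(df), 1 / len(df))
res = minimize(obj, w0, jac=grad, method='SLSQP', bounds=bounds, constraints=cons)
print(res.status, res.message, res.fun)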

Statsmodels power analysis number of observations

I'm confused by the statsmodels API of the TTestIndPower class.
In the plot_power function, is the number of observations plotted the total number of samples or the number of samples in each group?
It's per group.
This is the power plot I generated with statsmodels:
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.stats.power import TTestIndPower

# 'ax' was not shown in the original snippet; create one here so the call runs
fig, ax = plt.subplots()

panalysis = TTestIndPower()
panalysis.plot_power(
    dep_var="nobs",
    nobs=np.arange(5, 101),
    effect_size=np.arange(0.5, 1.5, .2),
    alpha=0.01,
    ax=ax,
)
And this is the confirmation with R for an effect size (es) of 0.7
> library(pwr)
> pwr.t.test(power = 0.8, sig.level = 0.01, d = 0.7)

     Two-sample t test power calculation

              n = 49.35336
              d = 0.7
      sig.level = 0.01
          power = 0.8
    alternative = two.sided

NOTE: n is number in *each* group
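Not part of the original answer, but the same agreement can be checked directly from statsmodels by solving for the sample size instead of plotting; solve_power returns the size of the first group (nobs1), with the second group controlled by the ratio argument (default 1):

from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.7, alpha=0.01, power=0.8)
print(n_per_group)  # should print roughly 49.35, matching pwr.t.test's per-group n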

Indexing dynamic vector of class probabilities

For my code, I have a large (up to 40,000) vector of class probabilities. This set of class probabilities also needs to be reweighted regularly, so assume it will change on every call of the code. The vector sums to 1. Given a probability value, I need to efficiently search through this vector for the corresponding index.
As an example, say the vector was [0.25, 0.25, 0.25, 0.25], a uniform probability across 4 objects, and my probability value is 0.67. This corresponds to index 3, since 0.67 > probvec[0] + probvec[1] but 0.67 <= probvec[0] + probvec[1] + probvec[2].
I'm open to changing the probability vector to make it the running sum, i.e. [0.25, 0.5, 0.75, 1], though then I'd also need a suggestion as to how to perform updates.
Any help would be appreciated.
Step 1: pre-compute all the partial sums up to the i-th index.
Step 2: scan your sum_probvec with binary search to obtain the result in logarithmic time.
import numpy as np

probvec = np.full(4, 0.25)
prob = 0.67

# pre-compute all the partial sums up to the i-th index
sum_probvec = [probvec[0]]
for i in range(1, len(probvec)):
    sum_probvec.append(sum_probvec[i-1] + probvec[i])

# use binary search for logarithmic-time results
i = -1                  # start below the first partial sum so index 1 can be returned
j = len(sum_probvec)
while i != j - 1:
    mid = (i + j) // 2
    if prob > sum_probvec[mid]:
        i = mid
    else:
        j = mid

index = i + 2
print(index)  # 3
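Since the question mentions being open to storing the running sum, here is a NumPy alternative (my addition, not part of the answer above): keep the cumulative sums in an array and let np.searchsorted do the binary search. Whenever the probabilities are reweighted, the cumulative array has to be rebuilt, which is a single O(n) np.cumsum call:

import numpy as np

probvec = np.full(4, 0.25)
prob = 0.67

cum = np.cumsum(probvec)          # running sums: [0.25, 0.5, 0.75, 1.0]
idx = np.searchsorted(cum, prob)  # 0-based position: 2
print(idx + 1)                    # 3, matching the 1-based convention used above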

Spiking value inside a loop

I am writing code to identify which of the options in an array best fits a given value, as below:
import numpy as np

def find_nearest(array, value):
    array = np.asarray(array)
    idx = (np.abs(array - value)).argmin()
    return array[idx]

thickness = np.array([0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8, 25.6, 51.2])
b = np.array([])
a = 100
c = 48.4

while c >= 0 and a > 0.1:
    a = find_nearest(thickness, c)
    if a > c:
        g = np.where(thickness == a)
        f = g[0] - 1
        a = thickness[f]
    else:
        a = a
    c = c - a
    print(c)
    if c == 0.1:
        break
    b = np.append(b, a)
    itemindex = np.where(thickness == a)
    itemindex = itemindex[0]
    upper_limit = len(thickness) + 1
    hj = np.arange(itemindex, upper_limit)
    thickness = np.delete(thickness, hj, None)
    print(thickness)

slots_sum = np.sum(b)
print("It will be used the following slots: ", b, "representing a total of {:.2f} mm".format(slots_sum))
However, for some reason I cannot figure out, when the code tries to find the proper combination of values to reach 48.4, it skips the value 0.4 in the array and selects 0.2 and 0.1 instead, which results in a sum of 48.3 instead of the correct 48.4. I have been banging my head against this for days; I will appreciate any help.
[22.8]
[ 0.1 0.2 0.4 0.8 1.6 3.2 6.4 12.8]
[10.]
[0.1 0.2 0.4 0.8 1.6 3.2 6.4]
[3.6]
[0.1 0.2 0.4 0.8 1.6 3.2]
[0.4]
[0.1 0.2 0.4 0.8 1.6]
[0.2]
[0.1]
[0.1]
[]
It will be used the following slots: [25.6 12.8 6.4 3.2 0.2 0.1] representing a total of 48.30 mm.
Multiply your inputs by 10 to give integer values and the answer is what you expect.
You will need to compensate for the inexact nature of floating point values if you want to compare the sums of two different lists of floating point values.
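A small demonstration of what is going on (my addition, not from the answer above): reproducing the running subtraction in plain floating point shows the remainder is not exactly 0.4, which is why find_nearest() returns 0.4, the a > c test fires, and the code backs off to 0.2. Scaling everything to integers (here, tenths of a millimetre) sidesteps the problem:

import numpy as np

# the remainder after the first four picks is slightly below 0.4, not 0.4 exactly
print(48.4 - 25.6 - 12.8 - 6.4 - 3.2)

# work in integer tenths of a millimetre instead
thickness = np.round(np.array([0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8, 25.6, 51.2]) * 10).astype(int)
target = int(round(48.4 * 10))
print(thickness)  # integers 1, 2, 4, ..., 512
print(target)     # 484, and 484 - 256 - 128 - 64 - 32 - 4 == 0 exactly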

How to efficiently calculate a running standard deviation

I have an array of lists of numbers, e.g.:
[0] (0.01, 0.01, 0.02, 0.04, 0.03)
[1] (0.00, 0.02, 0.02, 0.03, 0.02)
[2] (0.01, 0.02, 0.02, 0.03, 0.02)
...
[n] (0.01, 0.00, 0.01, 0.05, 0.03)
I would like to efficiently calculate the mean and standard deviation at each index of a list, across all array elements.
To do the mean, I have been looping through the array and summing the value at a given index of a list. At the end, I divide each value in my "averages list" by n (I am working with a population, not a sample from the population).
To do the standard deviation, I loop through again, now that I have the mean calculated.
I would like to avoid going through the array twice, once for the mean and then once for the standard deviation (after I have a mean).
Is there an efficient method for calculating both values, only going through the array once? Any code in an interpreted language (e.g., Perl or Python) or pseudocode is fine.
The answer is to use Welford's algorithm, which is very clearly defined after the "naive methods" in:
Wikipedia: Algorithms for calculating variance
It's more numerically stable than either the two-pass or online simple sum of squares collectors suggested in other responses. The stability only really matters when you have lots of values that are close to each other as they lead to what is known as "catastrophic cancellation" in the floating point literature.
You might also want to brush up on the difference between dividing by the number of samples (N) and N-1 in the variance calculation (squared deviation). Dividing by N-1 leads to an unbiased estimate of variance from the sample, whereas dividing by N on average underestimates variance (because it doesn't take into account the variance between the sample mean and the true mean).
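A quick illustration of that N versus N-1 point using NumPy's ddof argument (this snippet is mine, not part of the answer):

import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(np.var(x))          # divides by N   -> 4.0   (population / biased)
print(np.var(x, ddof=1))  # divides by N-1 -> ~4.57 (unbiased sample estimate)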
I wrote two blog entries on the topic which go into more details, including how to delete previous values online:
Computing Sample Mean and Variance Online in One Pass
Deleting Values in Welford’s Algorithm for Online Mean and Variance
You can also take a look at my Java implementation; the javadoc, source, and unit tests are all online:
Javadoc: stats.OnlineNormalEstimator
Source: stats.OnlineNormalEstimator.java
JUnit Source: test.unit.stats.OnlineNormalEstimatorTest.java
LingPipe Home Page
The basic answer is to accumulate the sum of both x (call it 'sum_x1') and x² (call it 'sum_x2') as you go. The value of the standard deviation is then:
stdev = sqrt((sum_x2 / n) - (mean * mean))
where
mean = sum_x1 / n
This is the population standard deviation; you get the sample standard deviation by using 'n - 1' instead of 'n' as the divisor.
You may need to worry about the numerical stability of taking the difference between two large numbers if you are dealing with large samples. Go to the external references in other answers (Wikipedia, etc) for more information.
Here is a literal pure Python translation of Welford's algorithm implementation from John D. Cook's excellent Accurately computing running variance article:
File running_stats.py
import math

class RunningStats:

    def __init__(self):
        self.n = 0
        self.old_m = 0
        self.new_m = 0
        self.old_s = 0
        self.new_s = 0

    def clear(self):
        self.n = 0

    def push(self, x):
        self.n += 1

        if self.n == 1:
            self.old_m = self.new_m = x
            self.old_s = 0
        else:
            self.new_m = self.old_m + (x - self.old_m) / self.n
            self.new_s = self.old_s + (x - self.old_m) * (x - self.new_m)

            self.old_m = self.new_m
            self.old_s = self.new_s

    def mean(self):
        return self.new_m if self.n else 0.0

    def variance(self):
        return self.new_s / (self.n - 1) if self.n > 1 else 0.0

    def standard_deviation(self):
        return math.sqrt(self.variance())

Usage:
rs = RunningStats()
rs.push(17.0)
rs.push(19.0)
rs.push(24.0)

mean = rs.mean()
variance = rs.variance()
stdev = rs.standard_deviation()

print(f'Mean: {mean}, Variance: {variance}, Std. Dev.: {stdev}')
Perhaps not what you were asking, but ... If you use a NumPy array, it will do the work for you, efficiently:
from numpy import array

nums = array(((0.01, 0.01, 0.02, 0.04, 0.03),
              (0.00, 0.02, 0.02, 0.03, 0.02),
              (0.01, 0.02, 0.02, 0.03, 0.02),
              (0.01, 0.00, 0.01, 0.05, 0.03)))

print(nums.std(axis=1))
# [ 0.0116619   0.00979796  0.00632456  0.01788854]

print(nums.mean(axis=1))
# [ 0.022  0.018  0.02   0.02 ]
By the way, there's some interesting discussion in this blog post and comments on one-pass methods for computing means and variances:
Computing sample mean and variance online in one pass
The Python runstats Module is for just this sort of thing. Install runstats from PyPI:
pip install runstats
Runstats summaries can produce the mean, variance, standard deviation, skewness, and kurtosis in a single pass of data. We can use this to create your "running" version.
from runstats import Statistics

stats = [Statistics() for num in range(len(data[0]))]

for row in data:
    for index, val in enumerate(row):
        stats[index].push(val)

for index, stat in enumerate(stats):
    print('Index', index, 'mean:', stat.mean())
    print('Index', index, 'standard deviation:', stat.stddev())
Statistics summaries are based on the Knuth and Welford method for computing standard deviation in one pass as described in the Art of Computer Programming, Vol 2, p. 232, 3rd edition. The benefit of this is numerically stable and accurate results.
Disclaimer: I am the author of the Python runstats module.
Statistics::Descriptive is a very decent Perl module for these types of calculations:
#!/usr/bin/perl

use strict; use warnings;

use Statistics::Descriptive qw( :all );

my $data = [
    [ 0.01, 0.01, 0.02, 0.04, 0.03 ],
    [ 0.00, 0.02, 0.02, 0.03, 0.02 ],
    [ 0.01, 0.02, 0.02, 0.03, 0.02 ],
    [ 0.01, 0.00, 0.01, 0.05, 0.03 ],
];

my $stat = Statistics::Descriptive::Full->new;
# You also have the option of using sparse data structures

for my $ref ( @$data ) {
    $stat->add_data( @$ref );
    printf "Running mean: %f\n", $stat->mean;
    printf "Running stdev: %f\n", $stat->standard_deviation;
}
__END__
Output:
Running mean: 0.022000
Running stdev: 0.013038
Running mean: 0.020000
Running stdev: 0.011547
Running mean: 0.020000
Running stdev: 0.010000
Running mean: 0.020000
Running stdev: 0.012566
Have a look at PDL (pronounced "piddle!").
This is the Perl Data Language which is designed for high precision mathematics and scientific computing.
Here is an example using your figures....
use strict;
use warnings;
use feature 'say';   # enable 'say'
use PDL;

my $figs = pdl [
    [0.01, 0.01, 0.02, 0.04, 0.03],
    [0.00, 0.02, 0.02, 0.03, 0.02],
    [0.01, 0.02, 0.02, 0.03, 0.02],
    [0.01, 0.00, 0.01, 0.05, 0.03],
];

my ( $mean, $prms, $median, $min, $max, $adev, $rms ) = statsover( $figs );

say "Mean scores:     ", $mean;
say "Std dev? (adev): ", $adev;
say "Std dev? (prms): ", $prms;
say "Std dev? (rms):  ", $rms;
Which produces:
Mean scores: [0.022 0.018 0.02 0.02]
Std dev? (adev): [0.0104 0.0072 0.004 0.016]
Std dev? (prms): [0.013038405 0.010954451 0.0070710678 0.02]
Std dev? (rms): [0.011661904 0.009797959 0.0063245553 0.017888544]
Have a look at PDL::Primitive for more information on the statsover function. This seems to suggest that ADEV is the "standard deviation".
However, it may be PRMS (which Sinan's Statistics::Descriptive example shows) or RMS (which ars's NumPy example shows). I guess one of these three must be right ;-)
For more PDL information, have a look at:
pdl.perl.org (official PDL page).
PDL quick reference guide on PerlMonks
Dr. Dobb's article on PDL
PDL Wiki
Wikipedia entry for PDL
SourceForge project page for PDL
Unless your array is zillions of elements long, don't worry about looping through it twice. The code is simple and easily tested.
My preference would be to use the NumPy array maths extension to convert your array of arrays into a NumPy 2D array and get the standard deviation directly:
>>> x = [ [ 1, 2, 4, 3, 4, 5 ], [ 3, 4, 5, 6, 7, 8 ] ] * 10
>>> import numpy
>>> a = numpy.array(x)
>>> a.std(axis=0)
array([ 1. , 1. , 0.5, 1.5, 1.5, 1.5])
>>> a.mean(axis=0)
array([ 2. , 3. , 4.5, 4.5, 5.5, 6.5])
If that's not an option and you need a pure Python solution, keep reading...
If your array is
x = [
[ 1, 2, 4, 3, 4, 5 ],
[ 3, 4, 5, 6, 7, 8 ],
....
]
Then the standard deviation is:
from math import sqrt

d = len(x[0])
n = len(x)
sum_x = [ sum(v[i] for v in x) for i in range(d) ]
sum_x2 = [ sum(v[i]**2 for v in x) for i in range(d) ]
std_dev = [ sqrt(sx2/n - (sx/n)**2) for sx, sx2 in zip(sum_x, sum_x2) ]
If you are determined to loop through your array only once, the running sums can be combined.
sum_x = [ 0 ] * d
sum_x2 = [ 0 ] * d

for v in x:
    for i, t in enumerate(v):
        sum_x[i] += t
        sum_x2[i] += t**2
This isn't nearly as elegant as the list comprehension solution above.
I like to express the update this way:
def running_update(x, N, mu, var):
    '''
    @arg x: the current data sample
    @arg N: the number of previous samples
    @arg mu: the mean of the previous samples
    @arg var: the variance over the previous samples
    @retval (N+1, mu', var') -- updated mean, variance and count
    '''
    N = N + 1
    rho = 1.0/N
    d = x - mu
    mu += rho*d
    var += rho*((1-rho)*d**2 - var)
    return (N, mu, var)
so that a one-pass function would look like this:
def one_pass(data):
    N = 0
    mu = 0.0
    var = 0.0
    for x in data:
        N = N + 1
        rho = 1.0/N
        d = x - mu
        mu += rho*d
        var += rho*((1-rho)*d**2 - var)
        # could yield here if you want partial results
    return (N, mu, var)
Note that this is calculating the sample variance (1/N), not the unbiased estimate of the population variance (which uses a 1/(N-1) normalization factor). Unlike the other answers, the variable var that is tracking the running variance does not grow in proportion to the number of samples. At all times it is just the variance of the set of samples seen so far (there is no final "dividing by n" in getting the variance).
In a class it would look like this:
class RunningMeanVar(object):

    def __init__(self):
        self.N = 0
        self.mu = 0.0
        self.var = 0.0

    def push(self, x):
        self.N = self.N + 1
        rho = 1.0/self.N
        d = x - self.mu
        self.mu += rho*d
        self.var += rho*((1-rho)*d**2 - self.var)

    # reset, accessors etc. can be set up as you see fit
This also works for weighted samples:
def running_update(w, x, N, mu, var):
    '''
    @arg w: the weight of the current sample
    @arg x: the current data sample
    @arg mu: the mean of the previous N samples
    @arg var: the variance over the previous N samples
    @arg N: the number of previous samples
    @retval (N+w, mu', var') -- updated mean, variance and count
    '''
    N = N + w
    rho = w/N
    d = x - mu
    mu += rho*d
    var += rho*((1-rho)*d**2 - var)
    return (N, mu, var)
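A quick sanity check of the weighted updater against NumPy (this check is mine, not part of the answer; it assumes the weighted running_update above is in scope):

import numpy as np

data    = [1.0, 2.0, 2.0, 3.0]
weights = [1.0, 2.0, 1.0, 1.0]

N, mu, var = 0, 0.0, 0.0
for w, x in zip(weights, data):
    N, mu, var = running_update(w, x, N, mu, var)

print(mu, var)  # ~2.0 and ~0.4: weighted mean and (population-style) weighted variance
print(np.average(data, weights=weights))                        # ~2.0
print(np.average((np.array(data) - mu) ** 2, weights=weights))  # ~0.4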
Here's a "one-liner", spread over multiple lines, in functional programming style:
from functools import reduce

# accumulator acc = (m2, count, mean); the final step divides m2 by (count - 1 + opt)
def variance(data, opt=0):
    return (lambda acc: acc[0] / (opt + acc[1] - 1))(
        reduce(
            lambda acc, x:
            (
                acc[0] + (x - acc[2]) ** 2 * acc[1] / (acc[1] + 1),
                acc[1] + 1,
                acc[2] + (x - acc[2]) / (acc[1] + 1)
            ),
            data,
            (0, 0, 0)))
As the following answer describes:
Does Pandas, SciPy, or NumPy provide a cumulative standard deviation function?
The Python Pandas module contains a method to calculate the running or cumulative standard deviation. For that, you'll have to convert your data into a Pandas dataframe (or a series if it is one-dimensional), but there are functions for that.
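A minimal sketch of what that looks like with pandas (my addition; expanding() gives the cumulative window, and ddof=0 asks for the population standard deviation to match the question's convention):

import pandas as pd

s = pd.Series([0.01, 0.01, 0.02, 0.04, 0.03])
print(s.expanding().mean())        # running mean after each element
print(s.expanding().std(ddof=0))   # running population standard deviation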
Here is a practical example of how you could implement a running standard deviation with Python and NumPy:
import numpy as np

a = np.arange(1, 10)
s = 0
s2 = 0

for i in range(0, len(a)):
    s += a[i]
    s2 += a[i] ** 2
    n = i + 1
    m = s / n
    std = np.sqrt((s2 / n) - (m * m))
    print(std, np.std(a[:i + 1]))
This will print out the calculated standard deviation and a check standard deviation calculated with NumPy:
0.0 0.0
0.5 0.5
0.8164965809277263 0.816496580927726
1.118033988749895 1.118033988749895
1.4142135623730951 1.4142135623730951
1.707825127659933 1.707825127659933
2.0 2.0
2.29128784747792 2.29128784747792
2.5819888974716116 2.581988897471611
I am just using the formula described in this thread:
stdev = sqrt((sum_x2 / n) - (mean * mean))
Responding to Charlie Parker's 2021 question:
I'd like an answer that I can just copy-paste into my code in numpy. My input is a matrix of size [N, 1] where N is the number of data points. I have already computed the running mean, and assuming we have computed the running std/variance, how do we update with the new batch of data?
Here we have two implementations of a function that takes the original mean, original variance and original size plus the new sample, and returns the total mean and total variance of the combined original and new sample (to get the standard deviation, just take the square root of the variance using **(1/2)). The first uses NumPy, and the second one uses Welford. You may choose the one that best applies to your case.
def mean_and_variance_update_numpy(previous_mean, previous_var, previous_size, sample_to_append):
    if type(sample_to_append) is np.matrix:
        sample_to_append = sample_to_append.A1
    else:
        sample_to_append = sample_to_append.flatten()
    sample_to_append_mean = np.mean(sample_to_append)
    sample_to_append_size = len(sample_to_append)
    total_size = previous_size + sample_to_append_size
    total_mean = (previous_mean*previous_size + sample_to_append_mean*sample_to_append_size) / total_size
    total_var = (((previous_var + (total_mean - previous_mean)**2) * previous_size)
                 + ((np.var(sample_to_append) + (sample_to_append_mean - total_mean)**2) * sample_to_append_size)) / total_size
    return (total_mean, total_var)

def mean_and_variance_update_welford(previous_mean, previous_var, previous_size, sample_to_append):
    if type(sample_to_append) is np.matrix:
        sample_to_append = sample_to_append.A1
    else:
        sample_to_append = sample_to_append.flatten()
    pos = previous_size
    mean = previous_mean
    v = previous_var*previous_size
    for value in sample_to_append:
        pos += 1
        mean_next = mean + (value - mean) / pos
        v = v + (value - mean)*(value - mean_next)
        mean = mean_next
    return (mean, v/pos)
Let's check if it works:
import numpy as np

# mean_and_variance_update_numpy(...) and mean_and_variance_update_welford(...)
# are defined as above
# Making the samples and results deterministic
np.random.seed(0)
# Our initial sample has 100 samples, we want to append 10
n0, n1 = 100, 10
# Using np.matrix only, because it was in the question. 'np.array' is more common
s0 = np.matrix(1e3+np.random.random_sample(n0)*1e-3).T
s1 = np.matrix(1e3+np.random.random_sample(n1)*1e-3).T
# Precalculating our mean and var for initial sample:
s0mean, s0var = np.mean(s0), np.var(s0)
# Calculating mean and variance for s0+s1 using our NumPy updater
mean_and_variance_update_numpy(s0mean, s0var, len(s0), s1)
# (1000.0004826329636, 8.24577589696613e-08)
# Calculating mean and variance for s0+s1 using our Welford updater
mean_and_variance_update_welford(s0mean, s0var, len(s0), s1)
# (1000.0004826329634, 8.245775896913623e-08)
# Similar results, now checking with NumPy's calculation over the concatenation of s0 and s1
s0s1 = np.concatenate([s0,s1])
(np.mean(s0s1), np.var(s0s1))
# (1000.0004826329638, 8.245775896917313e-08)
Here are the three results side by side:
# np(s0s1) (1000.0004826329638, 8.245775896917313e-08)
# np(s0)updnp(s1) (1000.0004826329636, 8.245775896966130e-08)
# np(s0)updwf(s1) (1000.0004826329634, 8.245775896913623e-08)
It is possible to see that the results are very similar.
n = int(input("Enter no. of terms: "))
L = []
for i in range(1, n + 1):
    x = float(input("Enter term: "))
    L.append(x)

total = 0
for i in range(n):
    total = total + L[i]
avg = total / n

sumdev = 0
for j in range(n):
    sumdev = sumdev + (L[j] - avg)**2

dev = (sumdev / n)**0.5
print("Standard deviation is", dev)
Figure I could jump on the old bandwagon. This should work with RGB values.
Adapted from
https://math.stackexchange.com/a/2148949
import numpy as np

class IterativeNormStats():

    def __init__(self):
        """uint64 max is 18446744073709551615
        256**2 = 65536
        so we can store 18446744073709551615 / 65536 = 281,474,976,710,656
        images before running into overflow issues. I think we'll be ok
        """
        self.n = 0
        self.rgb_sum = np.zeros(3, dtype=np.uint64)
        self.rgb_sq_sum = np.zeros(3, dtype=np.uint64)

    def update(self, img_arr):
        rgbs = np.reshape(img_arr, (-1, 3)).astype(np.uint64)
        self.n += rgbs.shape[0]
        self.rgb_sum += np.sum(rgbs, axis=0)
        self.rgb_sq_sum += np.sum(np.square(rgbs), axis=0)

    def mean(self):
        return self.rgb_sum / self.n

    def std(self):
        return np.sqrt((self.rgb_sq_sum / self.n) - np.square(self.rgb_sum / self.n))


def test_IterativeNormStats():
    img_a = np.ones((10, 10, 3), dtype=np.uint8) * (1, 2, 3)
    img_b = np.ones((10, 10, 3), dtype=np.uint8) * (2, 4, 6)
    img_c = np.ones((10, 10, 3), dtype=np.uint8) * (3, 6, 9)

    ins = IterativeNormStats()
    for i in range(1000):
        for img in [img_a, img_b, img_c]:
            ins.update(img)

    x = np.vstack([
        np.reshape(img_a, (-1, 3)),
        np.reshape(img_b, (-1, 3)),
        np.reshape(img_c, (-1, 3)),
    ] * 1000)
    expected_mean = np.mean(x, axis=0)
    expected_std = np.std(x, axis=0)

    print(expected_mean)
    print(ins.mean())
    print(expected_std)
    print(ins.std())

    assert np.allclose(ins.mean(), expected_mean)

if __name__ == "__main__":
    test_IterativeNormStats()
I came across the welford package, which is pretty simple to use:
pip install welford
Then
import numpy as np
from welford import Welford
# Initialize Welford object
w = Welford()
# Input data samples sequentialy
w.add(np.array([0, 100]))
w.add(np.array([1, 110]))
w.add(np.array([2, 120]))
# output
print(w.mean) # mean --> [ 1. 110.]
print(w.var_s) # sample variance --> [1, 100]
print(w.var_p) # population variance --> [ 0.6666 66.66]
# You can add other samples after calculating variances.
w.add(np.array([3, 130]))
w.add(np.array([4, 140]))
# output with added samples
print(w.mean) # mean --> [ 2. 120.]
print(w.var_s) # sample variance --> [ 2.5 250. ]
print(w.var_p) # population variance --> [ 2. 200.]
Notes:
- Unlike most other answers, you can feed a Welford object a NumPy array directly
- You can even add multiple samples at once with Welford.add_all(...)
- You can merge independent computations with w1.merge(w2)
- You should choose var_p or var_s depending on which one you want to use (population or sample variance)
- As said, those are variances, so you should use np.sqrt to get the associated standard deviation
Here is a simple implementation in Python:
class RunningStats:

    def __init__(self):
        self.mean_x_square = 0
        self.mean_x = 0
        self.n = 0

    def update(self, x):
        self.mean_x_square = (self.mean_x_square * self.n + x ** 2) / (self.n + 1)
        self.mean_x = (self.mean_x * self.n + x) / (self.n + 1)
        self.n += 1

    def mean(self):
        return self.mean_x

    def std(self):
        return self.variance() ** 0.5

    def variance(self):
        return self.mean_x_square - self.mean_x ** 2
Test:
import numpy as np
running_stats = RunningStats()
v = [1.1, 3.5, 5, -8.1, 91]
[running_stats.update(x) for x in v]
print(running_stats.mean() - np.mean(v))
print(running_stats.std() - np.std(v))
print(running_stats.variance() - np.var(v))
