Notes on calculating online statistics

In this article, I collect simple derivations that show how to calculate several statistical quantities as point-by-point online statistics. Particular emphasis is placed on supporting multiple weighting schemes (not limited to just uniform weights).


Introduction

Typical discussions of statistical calculations make use of operations which are most easily expressed over batches (e.g. vectors) of data. There are situations, though, where data arrives incrementally and the batch definitions are not practical. One example is a streaming data application that must calculate a statistic over a period of time too long for the entire history to be stored in working memory simultaneously. The goal, therefore, is to recast the traditional statistical calculations into a form which can be updated point-by-point while maintaining minimal state. More specifically, in this article we'll use the term "online statistics" to mean any calculation which can be performed point-by-point while requiring only a fixed-size (and finite) amount of state be propagated/stored at each update, for any length of data stream.

Weighted Online Mean

The (arguably) simplest statistic to start with is the average of a vector. Because we’re interested in the general case, we’ll skip the simple arithmetic mean (assuming uniform weights) and go directly to the weighted mean $\mu_n$. For a data vector of $n$ data points $\{x_1, x_2, \ldots, x_n\}$, the weighted mean is defined as the quantity

\begin{align} \mu_n &\equiv \frac{\sum_{i=1}^n w_i x_i}{\sum_{i=1}^n w_i} \end{align}

The online calculation follows from separating the $n$-th term from the rest of the series and simplifying.

\begin{align*} \mu_n &= \frac{1}{W_n} \left[ \sum_{i=1}^{n-1} w_i x_i + w_n x_n \right] & \qquad\qquad W_n &\equiv \sum_{i=1}^n w_i = W_{n-1} + w_n \\ {} &= \frac{1}{W_n} \left[ \frac{W_{n-1}}{W_{n-1}} \sum_{i=1}^{n-1} w_i x_i + w_n x_n \right] & \Aboxed{ W_n &= W_{n-1} + w_n } \\ {} &= \frac{W_{n-1}}{W_n} \mu_{n-1} + \frac{w_n}{W_n} x_n & \Aboxed{ \frac{W_{n-1}}{W_n} &= 1 - \frac{w_n}{W_n} } \end{align*}

Therefore, the online weighted mean is incrementally updated via the recursion

\begin{align} \Aboxed{ \mu_n &= \left( 1 - \frac{w_n}{W_n} \right) \mu_{n-1} + \frac{w_n}{W_n} x_n } \end{align}
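To make the recursion concrete, here is a minimal Python sketch of the update (the `OnlineMean` name and interface are my own illustrative choices, not an established library):

```python
class OnlineMean:
    """Weighted online mean, updated one (value, weight) pair at a time."""

    def __init__(self):
        self.W = 0.0   # running total weight W_n
        self.mu = 0.0  # running weighted mean mu_n

    def update(self, x, w=1.0):
        self.W += w
        frac = w / self.W                # weight fraction w_n / W_n
        self.mu += frac * (x - self.mu)  # equivalent to (1 - frac)*mu + frac*x
        return self.mu


# Agrees with the batch definition sum(w*x) / sum(w)
om = OnlineMean()
for x, w in [(1.0, 1.0), (2.0, 1.0), (10.0, 0.5)]:
    om.update(x, w)
print(om.mu)  # (1*1 + 1*2 + 0.5*10) / 2.5 == 3.2
```

Because the first update sees $w_1 / W_1 = 1$, the mean is effectively initialized to the first sample regardless of the starting value of the accumulator.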

Weighted Online Variance

Progressing from the mean, the next obvious statistic is the standard deviation. For reasons which will soon become clear, we’ll instead consider the biased weighted variance $\sigma_n^2$, from which the standard deviation is easily derived. Let the biased variance be defined as

\begin{align} \sigma_n^2 &\equiv \frac{\sum_{i=1}^n w_i (x_i - \mu_n)^2}{\sum_{i=1}^n w_i} \end{align}

where the weighted mean $\mu_n$ is defined in the previous section. To make progress with calculating the online variance, it is actually easier to first consider the sum of weighted squared deviations $\beta_n$ which ignores the normalization of dividing by the sum of weights.

\begin{align} \beta_n &\equiv \sum_{i=1}^n w_i (x_i - \mu_n)^2 \end{align}

For $\beta_n$, we can derive the online update via Welford’s Algorithm. (The reason for using Welford’s algorithm rather than naively expanding the summation is that, in finite floating-point arithmetic, the naive calculation is more apt to suffer a loss of precision due to catastrophic cancellation, whereas Welford’s algorithm is far less prone to this.) The online update to the sum of squared deviations is

\begin{align} \Aboxed{ \beta_n = \beta_{n-1} + w_n (x_n - \mu_n) (x_n - \mu_{n-1}) } \end{align}

(For a detailed derivation, see the appendix.)

Substituting this back into the expression for the variance,

\begin{align*} \sigma_n^2 &= \frac{\beta_n}{W_n} \\ {} &= \frac{1}{W_n} \left[ \beta_{n-1} + w_n (x_n - \mu_n) (x_n - \mu_{n-1}) \right] \end{align*}

which finally provides the online update to the biased variance $\sigma_n^2$:

\begin{align} \Aboxed{ \sigma_n^2 &= \left(1 - \frac{w_n}{W_n}\right) \sigma_{n-1}^2 + \frac{w_n}{W_n} (x_n - \mu_n)(x_n - \mu_{n-1}) } \end{align}
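A corresponding Python sketch, with `OnlineVariance` being my own illustrative name, accumulates the mean and variance together; the final check uses a small uniform-weight batch:

```python
class OnlineVariance:
    """Weighted online (biased) variance via the Welford-style update."""

    def __init__(self):
        self.W = 0.0    # total weight W_n
        self.mu = 0.0   # weighted mean mu_n
        self.var = 0.0  # biased weighted variance sigma_n^2

    def update(self, x, w=1.0):
        self.W += w
        frac = w / self.W
        mu_prev = self.mu
        self.mu += frac * (x - mu_prev)
        # sigma_n^2 = (1 - frac) * sigma_{n-1}^2 + frac * (x_n - mu_n) * (x_n - mu_{n-1})
        self.var = (1.0 - frac) * self.var + frac * (x - self.mu) * (x - mu_prev)
        return self.var


ov = OnlineVariance()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    ov.update(x)
print(ov.mu, ov.var)  # 5.0 and 4.0 for this classic uniform-weight example
```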

Online Linear Polynomial Regression

To incrementally calculate the fit of a degree-1 (linear) polynomial, we’ll start with the matrix-form expression for weighted linear regression. Recall that, for data vectors $(\bm x, \bm y)$ of length $n$ and model coefficients $m$ and $b$, the model $y = mx + b$ translates to

\begin{align*} \bm y &= \bm X \bm a & \text{where } \bm a &\equiv \begin{bmatrix} b \\ m \end{bmatrix} \\ & & \bm X &\equiv \begin{bmatrix} \bm 1 & \bm x \end{bmatrix} = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{bmatrix} \end{align*}

The weighted least-squares solution finds $\bm a$ such that, for an observed set of values $\bm{\hat y}$, the sum of weighted squared deviations with respect to weights $\bm w$ is minimized. The closed-form solution is:

\begin{align} \bm a = \left(\bm X^\top \bm W \bm X\right)^{-1} \bm X^\top \bm W \bm y \qquad\text{minimizes}\qquad \beta_n &= (\bm{\hat y} - \bm X \bm a)^\top \bm W (\bm{\hat y} - \bm X \bm a) \\ \text{where}\qquad \bm W &= \operatorname{diag}(\bm w) = \begin{bmatrix} w_1 & {} & {} & {} \\ {} & w_2 & {} & {} \\ {} & {} & \ddots & {} \\ {} & {} & {} & w_n \end{bmatrix} \nonumber \end{align}

Since the model is only two dimensional, the matrix inversion of the $2 \times 2$ matrix $\bm X^\top \bm W \bm X$ is tractable to expand element-by-element. Starting with this quadratic term, the elements are:

\begin{align*} \begin{aligned} \bm X^\top \bm W \bm X &= \begin{bmatrix} \sum_{i=1}^n w_i & \sum_{i=1}^n w_i x_i \\ \sum_{i=1}^n w_i x_i & \sum_{i=1}^n w_i x_i^2 \end{bmatrix} \\ \bm X^\top \bm W \bm X &= \begin{bmatrix} W_n & \delta_n \\ \delta_n & \tau_n \end{bmatrix} \end{aligned} \qquad\qquad & \text{where } \left\{ \begin{aligned} W_n &\equiv \sum_{i=1}^n w_i = W_{n-1} + w_n \\ \delta_n &\equiv \sum_{i=1}^n w_i x_i = \delta_{n-1} + w_n x_n \\ \tau_n &\equiv \sum_{i=1}^n w_i x_i^2 = \tau_{n-1} + w_n x_n^2 \end{aligned} \right. \end{align*}

Inverting the matrix in terms of the defined scalar accumulators,

\begin{align*} \left( \bm X^\top \bm W \bm X \right)^{-1} &= \frac{1}{W_n \tau_n - \delta_n^2} \begin{bmatrix} \tau_n & -\delta_n \\ -\delta_n & W_n \end{bmatrix} \end{align*}

Similarly, the component-wise expansion of the latter term in the closed-form solution is:

\begin{align*} \bm X^\top \bm W \bm y &= \begin{bmatrix} \sum_{i=1}^n w_i y_i \\ \sum_{i=1}^n w_i x_i y_i \end{bmatrix} \\ \bm X^\top \bm W \bm y &= \begin{bmatrix} \alpha_n \\ \chi_n \end{bmatrix} & \text{where } \left\{ \begin{aligned} \alpha_n &\equiv \sum_{i=1}^n w_i y_i = \alpha_{n-1} + w_n y_n \\ \chi_n &\equiv \sum_{i=1}^n w_i x_i y_i = \chi_{n-1} + w_n x_n y_n \end{aligned} \right. \end{align*}

Multiplying the two components together, we obtain expressions for the slope $m$ and intercept $b$ of the least-squares solution in terms of quantities which are amenable to online updates:

\begin{align} \left. \begin{aligned} m &= \frac{W_n \chi_n - \alpha_n \delta_n}{W_n \tau_n - \delta_n^2} \\ b &= \frac{\alpha_n \tau_n - \delta_n \chi_n}{W_n \tau_n - \delta_n^2} \end{aligned} \right\} \qquad\text{where}\qquad \left\{ \begin{aligned} \alpha_n &= \alpha_{n-1} + w_n y_n \\ \chi_n &= \chi_{n-1} + w_n x_n y_n \\ \delta_n &= \delta_{n-1} + w_n x_n \\ \tau_n &= \tau_{n-1} + w_n x_n^2 \end{aligned} \right. \end{align}

Inspection of these quantities reveals several nearly-recognizable terms. For instance, it is relatively obvious that $\mu_n = \delta_n / W_n$, but more subtly you may also see that $W_n \tau_n - \delta_n^2 = W_n^2 \sigma_n^2$.

Motivated both by these familiar terms as well as the desire to re-write the accumulation quantities in terms of the ratios $w_n/W_n$ (in alignment with how the mean and variance are already being defined), divide the numerator and denominator of each fraction by $W_n^2$ and update the definitions of the accumulators:

\begin{align} \left. \begin{aligned} m &= \frac{\chi'_n - \mu_{x,n} \mu_{y,n}}{\sigma_n^2} \\ b &= \frac{\mu_{y,n} \tau'_n - \mu_{x,n} \chi'_n}{\sigma_n^2} \end{aligned} \right\} \qquad\text{where}\qquad \left\{ \begin{aligned} \mu_{y,n} &= \left(1 - \frac{w_n}{W_n}\right) \mu_{y,n-1} + \frac{w_n}{W_n} y_n \\ \mu_{x,n} &= \left(1 - \frac{w_n}{W_n}\right) \mu_{x,n-1} + \frac{w_n}{W_n} x_n \\ \chi'_n &= \left(1 - \frac{w_n}{W_n}\right) \chi'_{n-1} + \frac{w_n}{W_n} x_n y_n \\ \tau'_n &= \left(1 - \frac{w_n}{W_n}\right) \tau'_{n-1} + \frac{w_n}{W_n} x_n^2 \\ \sigma_n^2 &= \left(1 - \frac{w_n}{W_n}\right) \sigma_{n-1}^2 + \frac{w_n}{W_n} (x_n - \mu_{x,n-1})(x_n - \mu_{x,n}) \end{aligned} \right. \end{align}

where we have introduced new names $\alpha_n/W_n \rightarrow \mu_{y,n}$ and $\delta_n/W_n \rightarrow \mu_{x,n}$ to explicitly recognize that they calculate the mean of each coordinate, and added primes to $\chi'_n$ and $\tau'_n$ to make it clear that they have been rescaled from their unprimed counterparts.

For the same reason as argued in the calculation of the variance, using a variation of Welford’s algorithm to accumulate $\sigma_n^2$ is numerically more accurate than the difference of quantities and therefore is preferred, but it comes at the apparent cost of requiring that yet another quantity be tracked. Thankfully, we can eliminate the accumulation of $\tau'_n$ based on the fact that it is related to the mean and variance as:

\begin{align*} W_n \tau_n - \delta_n^2 = W_n^2 \sigma_n^2 \qquad\Longrightarrow\qquad \tau_n' - \mu_{x,n}^2 &= \sigma_n^2 \end{align*}

The slightly non-obvious transformation¹ to

\begin{align*} \frac{\tau_n'}{\sigma_n^2} &= 1 + \frac{\mu_{x,n}^2}{\sigma_n^2} \end{align*}

lends itself to inserting into the expression for $b$ and simplifying:

\begin{align*} b &= \mu_{y,n} \left( 1 + \frac{\mu_{x,n}^2}{\sigma_n^2} \right) - \mu_{x,n} \frac{\chi'_n}{\sigma_n^2} \\ {} &= \mu_{y,n} - \mu_{x,n} \left( \frac{\chi'_n - \mu_{x,n}\mu_{y,n}}{\sigma_n^2} \right) \\ b &= \mu_{y,n} - m \mu_{x,n} \end{align*}

which eliminates any need to keep track of $\tau'_n$.

Continuing with the idea of Welford’s algorithm, the expression $\chi'_n - \mu_{x,n} \mu_{y,n}$ may be recognizable as the covariance between $x$ and $y$. Another variation of Welford’s algorithm applies to the calculation of the covariance between two quantities (see the appendix), so we substitute the covariance between $x$ and $y$, denoted $\rho_n^2$, into the numerator of the expression for the slope.

Finally, we find that the online calculation of the slope and intercept over a data series is given by the set of equations

\begin{align} \left. \boxed{ \begin{aligned} m &= \frac{\rho_n^2}{\sigma_n^2} \\ b &= \mu_{y,n} - m \mu_{x,n} \end{aligned} } \quad \right\} \qquad\text{where}\qquad \left\{ \quad \boxed{ \begin{aligned} \mu_{y,n} &= \left(1 - \frac{w_n}{W_n}\right) \mu_{y,n-1} + \frac{w_n}{W_n} y_n \\ \mu_{x,n} &= \left(1 - \frac{w_n}{W_n}\right) \mu_{x,n-1} + \frac{w_n}{W_n} x_n \\ \rho_n^2 &= \left(1 - \frac{w_n}{W_n}\right) \rho_{n-1}^2 + \frac{w_n}{W_n} (x_n - \mu_{x,n-1})(y_n - \mu_{y,n}) \\ \sigma_n^2 &= \left(1 - \frac{w_n}{W_n}\right) \sigma_{n-1}^2 + \frac{w_n}{W_n} (x_n - \mu_{x,n-1})(x_n - \mu_{x,n}) \end{aligned} } \right. \end{align}
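Collecting the boxed updates, a hedged Python sketch of the full online fit could look like the following; `OnlineLinearFit` and its members are illustrative names of my own choosing:

```python
class OnlineLinearFit:
    """Weighted online degree-1 (linear) least-squares fit of y = m*x + b."""

    def __init__(self):
        self.W = 0.0      # total weight W_n
        self.mu_x = 0.0   # weighted mean of x
        self.mu_y = 0.0   # weighted mean of y
        self.var_x = 0.0  # weighted (biased) variance of x, sigma_n^2
        self.cov = 0.0    # weighted (biased) covariance of x and y, rho_n^2

    def update(self, x, y, w=1.0):
        self.W += w
        frac = w / self.W
        mu_x_prev = self.mu_x
        self.mu_x += frac * (x - mu_x_prev)
        self.mu_y += frac * (y - self.mu_y)
        # Welford-style updates for the variance of x and the x-y covariance
        self.var_x = (1 - frac) * self.var_x + frac * (x - mu_x_prev) * (x - self.mu_x)
        self.cov = (1 - frac) * self.cov + frac * (x - mu_x_prev) * (y - self.mu_y)

    @property
    def slope(self):
        return self.cov / self.var_x

    @property
    def intercept(self):
        return self.mu_y - self.slope * self.mu_x


fit = OnlineLinearFit()
for x in range(10):
    fit.update(float(x), 3.0 * x + 1.0)
print(fit.slope, fit.intercept)  # ~3.0 and ~1.0 for a noiseless line
```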

Weighting Schemes

In the following subsections are the definitions for a couple of different weighting schemes. Before discussing particular weighting schemes, though, there are several properties and relationships that we can derive generally.

Definitions

Let us define the weight fraction $\omega_n \equiv w_n / W_n$ as the ratio of the weight factor $w_n$ and the total weight $W_n$. Since every statistical update above is defined in terms of the weight fraction, we will take that as the only known/given quantity, and then our task is to derive a self-consistent set of definitions for the weight factors and total weight.

We begin by writing the total weight in terms of the weight fractions. Start by expanding the total weight by one term and substituting $w_n = \omega_n W_n$ to eliminate the unknown $w_n$ that arises:

\begin{align*} W_n &= w_n + W_{n-1} \\ W_n &= \omega_n W_n + W_{n-1} \end{align*}

Then simply solving for $W_n$ in terms of just $\omega$ and $W$:

\begin{align*} (1 - \omega_n) W_n &= W_{n-1} \\ W_n &= \frac{1}{1 - \omega_n} W_{n-1} \end{align*}

This is a simple recursive definition, where the only subtlety to consider is that there’s no predecessor to $W_1$.

To resolve this definitional problem, let us assume that $W_1 = 1$. By definition of the total weight $W_1 = \sum_{i=1}^1 w_i = w_1$, we constrain to $w_1 = 1$, and therefore $\omega_1 = 1$ as well. Note that we’ve just argued that no matter the weighting scheme, we will have $\boxed{\omega_1 = w_1 = W_1 = 1}$. We can justify this choice by considering (as an example) the update to the online mean. The “previous” mean is undefined, but since it is multiplied by the factor $(1 - \omega_1) = 0$, the first “update” is operationally equivalent to an assignment of the mean to the value of the first data sample. If you examine every one of the updates defined above, you should see that special-casing the first weight factor to unity effectively initializes every accumulation quantity.

Returning to the total weight recursion, we can now pretty easily conclude that the explicit formula for $W_n$ is

\begin{align} \Aboxed{ W_n &= \prod_{i=2}^n \frac{1}{1 - \omega_i} } & \text{for } n > 1 \end{align}

From this, it directly follows that the traditional weight factors are given by

\begin{align} \Aboxed{ w_n &= \omega_n W_n = \omega_n \prod_{i=2}^n \frac{1}{1 - \omega_i} } & \text{for } n > 1 \end{align}
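As a quick sanity check of these formulas, a small Python sketch (with an assumed helper name `weights_from_fractions`) can reconstruct the weight factors and total weights from a sequence of weight fractions:

```python
def weights_from_fractions(omegas):
    """Reconstruct total weights W_n and weight factors w_n from the
    weight fractions omega_n (with the convention omega_1 = 1)."""
    Ws, ws = [], []
    W = 1.0  # W_1 = w_1 = omega_1 = 1 by convention
    for n, omega in enumerate(omegas, start=1):
        if n > 1:
            W /= (1.0 - omega)   # W_n = W_{n-1} / (1 - omega_n)
        Ws.append(W)
        ws.append(omega * W)     # w_n = omega_n * W_n
    return ws, Ws


# Uniform weighting (omega_n = 1/n) should recover w_n = 1 and W_n = n
ws, Ws = weights_from_fractions([1.0 / n for n in range(1, 6)])
print(ws)  # ~[1, 1, 1, 1, 1]
print(Ws)  # ~[1, 2, 3, 4, 5]
```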

Effective sample size

The effective sample size $n_\mathrm{eff}$ of a weighting scheme (given one particular definition) can be defined as a function of only the weights:

\begin{align} n_\mathrm{eff} &\equiv \frac{\left( \sum_{i=1}^n w_i \right)^2}{\sum_{i=1}^n w_i^2} \end{align}

We’ve already derived the expression for the sum of weights $W_n$ above, so we turn to the sum of squared weights $Q_n$. Using the definition of weight factors $w_i$ in terms of the weight fractions $\omega_i$,

\begin{align} Q_n &\equiv \sum_{i=1}^n w_i^2 \nonumber \\ {} &= \sum_{i=1}^n \left( \omega_i \prod_{j=2}^i \frac{1}{1 - \omega_j} \right)^2 \nonumber \\ \Aboxed{ Q_n &= 1 + \sum_{i=2}^n \left\{ \omega_i^2 \prod_{j=2}^i \left(\frac{1}{1 - \omega_j}\right)^2 \right\} } \end{align}

While mathematically correct, this expression is non-ideal from a numerical standpoint. First, the repeated inner products would (naively) mean performing the same calculation multiple times (i.e. writing $X_j \equiv \left(\frac{1}{1-\omega_j}\right)^2$ for brevity, the $i = 3$ term calculates the product $X_2 \cdot X_3$, the $i = 4$ term calculates $X_2 \cdot X_3 \cdot X_4$, and so on). Second, consider the case where $\omega_j$ is very small such that $\left(\frac{1}{1-\omega_j}\right) \sim 1 + \epsilon$; the repeated products between terms then imply sums involving contributions from terms of order $\mathcal{O}(\epsilon^{2i})$, which is challenging to handle accurately with finite floating point arithmetic.

For both of these reasons, let us expand the summation and products for a few terms and look for a pattern:

\begin{align*} Q_n &= 1 + \sum_{i=2}^n \left\{ \omega_i^2 \prod_{j=2}^i \left(\frac{1}{1 - \omega_j}\right)^2 \right\} \\ {} &= \begin{aligned}[t] 1 & + \omega_2^2\left(\frac{1}{1-\omega_2}\right)^2 + \omega_3^2\left(\frac{1}{1-\omega_2}\right)^2 \left(\frac{1}{1-\omega_3}\right)^2 \\ & + \omega_4^2\left(\frac{1}{1-\omega_2}\right)^2 \left(\frac{1}{1-\omega_3}\right)^2 \left(\frac{1}{1-\omega_4}\right)^2 + \ldots \end{aligned} \end{align*}

Notice that except for the first term, the factor of $\left(\frac{1}{1-\omega_2}\right)^2$ appears in every later term. Factoring this out,

\begin{align*} Q_n &= 1 + \left(\frac{1}{1-\omega_2}\right)^2 \left[ \omega_2^2 + \omega_3^2 \left(\frac{1}{1-\omega_3}\right)^2 + \omega_4^2 \left(\frac{1}{1-\omega_3}\right)^2 \left(\frac{1}{1-\omega_4}\right)^2 + \ldots \right] \end{align*}

The term inside the square brackets now has a single initial factor that stands on its own, but every subsequent term contains a factor of $\left(\frac{1}{1-\omega_3}\right)^2$ which can also be factored out. Repeating the pattern:

\begin{align*} Q_n &= 1 + \left(\frac{1}{1-\omega_2}\right)^2 \left[ \omega_2^2 + \left(\frac{1}{1-\omega_3}\right)^2 \left\{ \omega_3^2 + \left(\frac{1}{1-\omega_4}\right)^2 \bigg( \omega_4^2 + \ldots \bigg) \right\} \right] \end{align*}

Finally, to help make the pattern a little more complete, let’s recall that the initial $1$ comes from the first term where $w_1 = \omega_1 = 1$, so we can substitute back in $1 \rightarrow \omega_1^2$ for it to get the nested pattern:

\begin{align*} Q_n &= \omega_1^2 + \left(\frac{1}{1-\omega_2}\right)^2 \left[ \omega_2^2 + \left(\frac{1}{1-\omega_3}\right)^2 \left\{ \omega_3^2 + \left(\frac{1}{1-\omega_4}\right)^2 \bigg( \omega_4^2 + \ldots \bigg) \right\} \right] \end{align*}

Starting at the innermost bracket, the next bracket outward is a simple multiply-add of the previous quantity with terms that depend only on either $\omega_i$ or $\omega_{i+1}$. This is a recursive process:

\begin{align*} Q_n &= \underbrace{ \omega_1^2 + \left(\frac{1}{1-\omega_2}\right)^2 \underbrace{ \Bigg[ \omega_2^2 + \left(\frac{1}{1-\omega_3}\right)^2 \underbrace{ \Bigg\{ \omega_3^2 + \left(\frac{1}{1-\omega_4}\right)^2 \bigg( \underbrace{\omega_4^2 + \ldots}_{Q_n^{(4)}} \bigg) \Bigg\} }_{Q_n^{(3)}} \Bigg] }_{Q_n^{(2)}} }_{Q_n^{(1)}} \end{align*}

where we’ve introduced new quantities $Q_n^{(i)}$ which denote the partial sum of $Q_n$ at step $(i)$. Using this new notation, we have:

\begin{align*} Q_n &= Q_n^{(1)} & Q_n^{(i)} &= \omega_i^2 + \left(\frac{1}{1-\omega_{i+1}}\right)^2 Q_n^{(i+1)} \end{align*}

The only remaining quantity that needs to be specified is the initial condition $Q_n^{(n)}$. It should be clear from the finite expansion performed above that if $n = 4$, there are no remaining terms implied by the $\ldots$, so $Q_4^{(4)} = \omega_4^2$. In general, we conclude that $Q_n^{(n)} = \omega_n^2$, giving us the complete recursive definition of the sum of squared weights:

\begin{align} \boxed{ \begin{aligned} Q_n &\equiv \sum_{i=1}^n w_i^2 = Q_n^{(1)} & Q_n^{(n)} &= \omega_n^2 \qquad\text{(initial condition)} \\ {} & {} & Q_n^{(i)} &= \omega_i^2 + \left(\frac{1}{1-\omega_{i+1}}\right)^2 Q_n^{(i+1)} \end{aligned} } \end{align}

Therefore, the generic method for calculating the effective sample size — agnostic to the particular weighting scheme in use — is to calculate the sum of weights $W_n$ and sum of squared weights $Q_n$ as defined above, and then the effective sample size is simply

\begin{align} \boxed{ n_\mathrm{eff} = \frac{W_n^2}{Q_n} } \end{align}
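A short Python sketch of this procedure might look like the following, where the nested recursion for $Q_n$ is evaluated from the innermost bracket ($i = n$) outward; the function names are my own:

```python
def total_weight(omegas):
    """W_n from the weight fractions (omega_1 = 1 by convention)."""
    W = 1.0
    for omega in omegas[1:]:
        W /= (1.0 - omega)
    return W


def sum_squared_weights(omegas):
    """Q_n evaluated via the nested recursion, starting from Q_n^(n) = omega_n^2."""
    Q = omegas[-1] ** 2                       # initial condition
    for i in range(len(omegas) - 2, -1, -1):  # i = n-1, ..., 1 (0-based indexing)
        Q = omegas[i] ** 2 + Q / (1.0 - omegas[i + 1]) ** 2
    return Q


def effective_sample_size(omegas):
    W = total_weight(omegas)
    return W ** 2 / sum_squared_weights(omegas)


# Uniform weighting, omega_n = 1/n, should give n_eff = n
print(effective_sample_size([1.0 / n for n in range(1, 11)]))  # ~10.0
```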

Uniform Weights

Definitions

Uniform weights are the implicit weights seen in the “unweighted” case.

On the surface, there’s an ambiguity in the ratio $\omega_n = w_n / W_n$: it could be interpreted as the total weight always being unity, with every prior data point “reweighted” to the uniform value $1/n$ as each new data point is observed, or as every weight factor being $1$ while the total weight grows. Both interpretations have their benefits, but the expressions derived above resolve this ambiguity (i.e. there’s only one self-consistent interpretation given the other assumptions we’ve made).

The total weight is

\begin{align*} W_n &= \prod_{i=2}^n \frac{1}{1 - \frac{1}{i}} \\ {} &= \prod_{i=2}^n \frac{i}{i - 1} \\ \Aboxed{ W_n &= n } \end{align*}

where the last line is true because the product telescopes: the numerator of each term cancels the denominator of the next, leaving only the first denominator ($i - 1 = 2 - 1 = 1$) and the last numerator ($i = n$). It follows trivially that the weight factors are then

\begin{align*} w_n &= \frac{1}{n} \cdot n \\ \Aboxed{ w_n &= 1 } \end{align*}

Effective Sample Size

The effective sample size for uniform weights is trivial since each weight factor is $1$. The sum of squared weights is $\boxed{Q_n = n}$, so $n_\mathrm{eff} = W_n^2/Q_n$ gives $\boxed{n_\mathrm{eff} = n}$, exactly as expected.

Exponential Weights

The exponential weighting scheme corresponds to taking a constant weight fraction, $\omega_n = \lambda$ for all $n > 1$ (with $\omega_1 = 1$ as required of every scheme). Qualitatively, the constant weight fraction results in an exponential weight distribution since the newest data point always contributes a fixed fraction of the total weight, which necessarily means that prior data points are repeatedly “de-weighted” (relative to the total weight) by a fixed ratio as well. The first data point undergoes this de-weighting the greatest number of times, and the de-weighting occurs fewer times for successively newer data points. This describes a geometric progression of increasing relative weights.

Analytically, the total weight and weight factors according to the expression above are:

\begin{align*} W_n &= \prod_{i=2}^n \frac{1}{1 - \lambda} & w_n &= \lambda W_n \\ \Aboxed{ W_n &= \left( \frac{1}{1 - \lambda} \right)^{n - 1} } & \Aboxed{ w_n &= \lambda \left( \frac{1}{1 - \lambda} \right)^{n - 1} } \end{align*}

for $n > 1$.

Effective Sample Size

Rather than using the generic, recursive definition for the sum of squared weights $Q_n$, we can specialize given the known weight factors. Explicitly,

\begin{align*} Q_n = \sum_{i=1}^n w_i^2 &= 1 + \sum_{i=2}^n \lambda^2 \left( \frac{1}{1 - \lambda} \right)^{2(i-1)} \end{align*}

Formally, the summation is a geometric series, which is easier to see by making the convenient substitutions letting $\gamma \equiv 1 - \lambda$ and $j = i - 1$:

\begin{align*} Q_n &= 1 + (1 - \gamma)^2 \cdot \sum_{j=1}^{n-1} \left( \frac{1}{\gamma^2} \right)^j \\ &= 1 + \left(\frac{1 - \gamma}{\gamma} \right)^2 \cdot \sum_{j=1}^{n-1} \left( \frac{1}{\gamma^2} \right)^{j-1} \end{align*}

where the second line follows by pulling out one factor of $1/\gamma^2$ from the summation in order to decrease the power inside the summation by 1. This puts the summation in the form of a finite geometric series, which has a well-known closed-form expression for the partial sum, so that

\begin{align*} &= 1 + \left(\frac{1 - \gamma}{\gamma} \right)^2 \frac{1 - \gamma^{-2(n-1)}}{1 - \gamma^{-2}} & \text{note}\;& \frac{1}{1 - 1/\gamma^2} = \frac{\gamma^2}{\gamma^2 - 1} \\ Q_n &= 1 + \frac{1 - \gamma}{1 + \gamma} \left( \gamma^{-2(n-1)} - 1 \right) \end{align*}

Substituting back in the definition of $\gamma = 1 - \lambda$, we get that the sum of squared weights for exponential weighting is given by:

\begin{align*} \Aboxed{ Q_n &= 1 + \frac{\lambda}{2 - \lambda} \left( (1 - \lambda)^{-2(n-1)} - 1 \right) } \end{align*}

Combining with the known expression for $W_n$, the effective sample size $n_\mathrm{eff}$ is

\begin{align*} n_\mathrm{eff} &= \left(1 - \lambda \right)^{-2(n-1)} \left[ 1 + \frac{\lambda}{2 - \lambda} \left( (1 - \lambda)^{-2(n-1)} - 1 \right) \right]^{-1} \\ {} &= \left[ \left( 1 - \frac{\lambda}{2 - \lambda} \right) \left(1 - \lambda \right)^{2(n-1)} + \frac{\lambda}{2 - \lambda} \right]^{-1} \\ &= \left[ \frac{2}{2 - \lambda} \left(1 - \lambda \right)^{2n-1} + \frac{\lambda}{2 - \lambda} \right]^{-1} \\ \Aboxed{ n_\mathrm{eff} &= \frac{2-\lambda}{\lambda + 2(1-\lambda)^{2n-1}} } \end{align*}

Scaling calculations for $\lambda$

To define the value of $\lambda$, a useful definition can be to consider the “time” at which the $k$-th prior weight decays to some fraction of the latest weight, i.e. $\frac{w_{n-k}}{w_{n}} = \epsilon$ (where $0 < \epsilon < 1$). Solving for $\lambda$ as a function of $k$ yields:

\begin{align*} \lambda &= 1 - e^{\ln \epsilon / k} \end{align*}

The two most common values of $\epsilon$ are:

  1. $\epsilon = \frac{1}{2}$ corresponds to defining the “half life” $k = k_\mathrm{half}$. Then $\lambda = 1 - e^{-\ln 2 / k_\mathrm{half}}$.

  2. $\epsilon = \frac{1}{e}$ corresponds to defining the “time constant” $k = \kappa$. Then $\lambda = 1 - e^{-1/\kappa}$.

The half life and time constant are related by $k_\mathrm{half} = \kappa \ln 2$.

The effective sample size calculation gives another way to define $\lambda$. Consider the limit of $n_\mathrm{eff}$ as $n \rightarrow \infty$:

\begin{align} \lim_{n\rightarrow\infty} n_\mathrm{eff} &= \frac{2-\lambda}{\lambda + 2\tozero{(1-\lambda)^{2n-1}}} \\ {} &= \frac{2}{\lambda} - 1 \end{align}

We see that the effective sample size asymptotically approaches a constant defined in terms of only the weight fraction $\lambda$. Therefore, we can reverse the relationship and solve for a weight fraction that results in a particular effective sample size, $\lambda = \frac{2}{1 + n_\mathrm{eff}}$.
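The various parameterizations of $\lambda$ can be summarized in a small Python sketch (function names are illustrative, and the decay-based forms use the ratio convention $w_{n-k}/w_n = \epsilon$ from above):

```python
import math

def lambda_from_decay(epsilon, k):
    """Weight fraction lambda such that w_{n-k} / w_n = epsilon (0 < epsilon < 1)."""
    return 1.0 - math.exp(math.log(epsilon) / k)

def lambda_from_half_life(k_half):
    return lambda_from_decay(0.5, k_half)

def lambda_from_time_constant(kappa):
    return lambda_from_decay(math.exp(-1.0), kappa)

def lambda_from_neff(n_eff):
    """Weight fraction whose asymptotic effective sample size is n_eff."""
    return 2.0 / (1.0 + n_eff)


print(lambda_from_half_life(10))      # ~0.0670 (half-life of 10 samples)
print(lambda_from_time_constant(10))  # ~0.0952 (time constant of 10 samples)
print(lambda_from_neff(19))           # 0.1, since lim n_eff = 2/0.1 - 1 = 19
```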


Welford’s Algorithm

The sum of weighted squared deviations $\beta_n$ is defined as

\begin{align*} \beta_n \equiv \sum_{i=1}^n w_i (x_i - \mu_n)^2 \end{align*}

To calculate the quantity as an online update from $\beta_{n-1}$, the naive approach looks like the following,² where the square is expanded, terms are regrouped, and the individual sums are identified and accumulated.

\begin{align*} \beta_n &= \sum_{i=1}^n w_i (x_i - \mu_n)^2 \\ {} &= \sum_{i=1}^n w_i \left( x_i^2 - 2x_i\mu_n + \mu_n^2 \right) \\ {} &= \left( \sum_{i=1}^n w_i x_i^2 \right) - 2\mu_n \left( \sum_{i=1}^n w_i x_i \right) + \mu_n^2 \left( \sum_{i=1}^n w_i \right) \\ {} &= \left( \sum_{i=1}^n w_i x_i^2 \right) - \mu_n^2 W_n \\ \beta_n &= \tau_n - \mu_n^2 W_n \qquad\qquad\text{where } \tau_n \equiv \sum_{i=1}^n w_i x_i^2 = \tau_{n-1} + w_n x_n^2 \end{align*}

where $\mu_n$ and $W_n$ are updated as derived above, and we’ve introduced a new quantity $\tau_n$ to track the sum of weighted squares. The problem is that if the quantities $\tau_n$ and $\mu_n^2 W_n$ have similar magnitude, the difference can suffer from catastrophic cancellation and significantly lose precision.
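The loss of precision is easy to demonstrate numerically. The following Python sketch (restricted to uniform weights for simplicity) compares the naive $\tau_n - \mu_n^2 W_n$ form against the Welford-style accumulation on data with a large offset and small variance:

```python
import random

def variance_naive(xs):
    """Biased variance via the tau_n - mu_n^2 W_n form (uniform weights)."""
    n = len(xs)
    tau = sum(x * x for x in xs)
    mu = sum(xs) / n
    return tau / n - mu * mu

def variance_welford(xs):
    """Biased variance via the Welford-style update (uniform weights)."""
    W = mu = beta = 0.0
    for x in xs:
        W += 1.0
        mu_prev = mu
        mu += (x - mu_prev) / W
        beta += (x - mu) * (x - mu_prev)
    return beta / W

random.seed(0)
# Small variance (~1) on top of a large offset (~1e8) stresses the naive form.
xs = [1.0e8 + random.gauss(0.0, 1.0) for _ in range(100_000)]
print(variance_naive(xs))    # noticeably off (can even come out negative)
print(variance_welford(xs))  # close to 1.0
```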

Welford’s Algorithm describes an alternate update method which avoids the worst of the numerical issues. To derive the Welford-style update, instead start by considering the difference of successive sums of deviations,

\begin{align*} \beta_n - \beta_{n-1} &= \left[ \sum_{i=1}^n w_i \left( x_i - \mu_n \right)^2 \right] - \left[ \sum_{i=1}^{n-1} w_i \left( x_i - \mu_{n-1} \right)^2 \right] \end{align*}

Expanding the first summation to exclude the $i=n$ term and grouping the remaining summation with the second summation,

\begin{align*} \beta_n - \beta_{n-1} &= w_n \left(x_n - \mu_n\right)^2 + \sum_{i=1}^{n-1} w_i \left[ \left( x_i - \mu_n \right)^2 - \left( x_i - \mu_{n-1} \right)^2 \right] \end{align*}

The term in the square brackets is a difference of squares, so factoring as $a^2 - b^2 = (a - b)(a + b)$, the summation can be rewritten as

\begin{align*} \beta_n - \beta_{n-1} &= w_n \left(x_n - \mu_n\right)^2 + \sum_{i=1}^{n-1} w_i \left[ (\tozero{x_i} - \mu_n - \tozero{x_i} + \mu_{n-1}) (x_i - \mu_n + x_i - \mu_{n-1}) \right] \end{align*}

The first factor simplifies to $(\mu_{n-1} - \mu_n)$ (the $x_i$ cancel) and can be pulled out of the summation. Then, using the null identity $\sum_{i=1}^n w_i \left( x_i - \mu_n \right) = 0$ (applied with $n \rightarrow n - 1$), the second term in the square brackets trivially sums to zero:

\begin{align*} {} &= w_n \left(x_n - \mu_n\right)^2 + (\mu_{n-1} - \mu_n) \sum_{i=1}^{n-1} w_i \left[(x_i - \mu_n) + \tozero{(x_i - \mu_{n-1})} \right] \end{align*}

(The null identity is readily derived from the definition of the weighted mean.) For the first term in the square brackets, where the limit $n-1$ of the summation and the subscript $n$ on the mean $\mu$ differ, we must first manipulate the null identity.

\begin{align*} 0 &= \sum_{i=1}^n w_i \left( x_i - \mu_n \right) \\ 0 &= \sum_{i=1}^{n-1} w_i \left(x_i - \mu_n \right) + w_n \left( x_n - \mu_n \right) \\ -w_n \left( x_n - \mu_n \right) &= \sum_{i=1}^{n-1} w_i \left(x_i - \mu_n \right) \end{align*}

Substituting this back into the expression for the differences of $\beta$, we finally arrive at

\begin{align*} \beta_n - \beta_{n-1} &= w_n \left(x_n - \mu_n\right)^2 - w_n (\mu_{n-1} - \mu_n) (x_n - \mu_n) \\ {} &= w_n (x_n - \mu_n) \left[ (x_n - \tozero{\mu_n}) - (\mu_{n-1} - \tozero{\mu_n}) \right] \\ \beta_n - \beta_{n-1} &= w_n (x_n - \mu_n) (x_n - \mu_{n-1}) \end{align*}

which gives us the completed derivation of the weighted version of Welford’s algorithm:

\begin{align} \Aboxed{ \beta_n = \beta_{n-1} + w_n (x_n - \mu_{n-1}) (x_n - \mu_n) } \end{align}

It is also interesting to note that the variance $\operatorname{var}(x)$ is a special case of the covariance $\operatorname{cov}(x, y)$ between two vectors $x$ and $y$ (namely, with $y = x$). Therefore, a useful ansatz is that we can generalize Welford’s algorithm for the variance to also calculate the online covariance $\rho_n^2$ of two series $\{x_i\}$ and $\{y_i\}$ as:

\begin{align} \left. \begin{aligned} \Aboxed{ \gamma_n &= \gamma_{n-1} + w_n (x_n - \mu_{x,n-1}) (y_n - \mu_{y,n}) } \\ {} &= \gamma_{n-1} + w_n (x_n - \mu_{x,n}) (y_n - \mu_{y,n-1}) \end{aligned} \right\} \qquad\text{where}\qquad \left\{ \begin{aligned} \gamma_n &\equiv \sum_{i=1}^n w_i (x_i - \mu_{x,n}) (y_i - \mu_{y,n}) \\ \rho_n^2 &\equiv \frac{\gamma_n}{W_n} \end{aligned} \right. \end{align}

where the mean $\mu$ has gained subscripts to differentiate the two distinct means. On the left, there’s an apparent asymmetry in whether $x_n$ or $y_n$ is paired with its $n$-th or $(n-1)$-th mean, but it is easy to show that $z_n - \mu_{z,n} = (1 - w_n/W_n) (z_n - \mu_{z,n-1})$, so the first and second lines are equivalent. A full proof that the generalization is valid is given by Schubert & Gertz (2018)³.

Online Least Squares via QR decomposition

A well-known substitute for directly solving for the regression coefficients via the normal equation

\begin{align*} \bm a = \left(\bm X^\top \bm X\right)^{-1} \bm X^\top \bm y \end{align*}

which involves an explicit matrix inversion is to instead QR decompose the regressor $\bm X$ and solve the system of equations

\begin{align*} \bm R \bm a &= \bm Q^\top \bm y &\text{i.e. } \bm a = \bm R {} \setminus \left(\bm Q^\top \bm y\right) \end{align*}

where the backslash denotes solving the system of equations without explicitly inverting $\bm R$. For a detailed derivation, see Appendix D.3 of my thesis⁴.

For use in online statistics, we must explicitly construct the QR decomposition (rather than relying on a numerical linear algebra library as typically happens). One method (and the easiest for our use here) of calculating the QR decomposition is via successive rounds of Gram-Schmidt orthogonalization. The first column of $\bm Q$, $\bm q_1$, is simply the first column of $\bm X$, $\bm x_1$, normalized to unit length:

\begin{align*} \bm q_1 &\equiv \frac{\bm v_1}{\| \bm v_1 \|} & \text{where } \bm v_1 &= \bm x_1 \end{align*}

The second column $\bm q_2$ is constructed by orthogonalizing $\bm x_2$ by removing its component which is parallel to $\bm q_1$ and normalizing:

\begin{align*} \bm q_2 &\equiv \frac{\bm v_2}{\| \bm v_2 \|} & \text{where } \bm v_2 &= \bm x_2 - \left( \bm q_1^\top \bm x_2 \right) \bm q_1 \end{align*}

A third column would be constructed similarly — projecting out the components of both $\bm q_1$ and $\bm q_2$ which are present in $\bm x_3$ and normalizing — but since we are interested specifically in regressing a degree-1 polynomial with only two coefficients, just the first two columns will suffice.

These together implicitly define $\bm Q = \begin{bmatrix} \bm q_1 & \bm q_2 \end{bmatrix}$. The partnering $\bm R$ matrix is a record of the transformations performed to turn the column basis $\bm x_i$ into $\bm q_i$. Namely for the $2 \times 2$ case,

\begin{align*} \bm R = \begin{bmatrix} \| \bm v_1 \| & \bm q_1^\top \bm x_2 \\ 0 & \| \bm v_2 \| \end{bmatrix} \end{align*}

which can be verified by performing the block-multiplication $\bm Q \bm R$ and observing that the products recover the column vectors $\bm x_i$.

For the specific case of a weighted degree-1 polynomial regression, there is an extra subtlety: to account for the weight factors, the regressor must be replaced by $\bm X \rightarrow \bm X' = {\bm W}^{1/2} \bm X$ and the ordinate vector by $\bm y \rightarrow \bm y' = {\bm W}^{1/2} \bm y$, where the square root is applied elementwise to the diagonals of the weight matrix. Let $(\bm w')_{i} = \sqrt{(\bm w)_i}$ be the vector of the elementwise square roots of the weight vector.

With the weights in mind, we start with the first column of $\bm X'$, which is simply the square-root weight vector $\bm w'$. Its norm is $\| \bm w' \| = \sqrt{W_n}$, so we easily find

\begin{align*} \bm q_1 &= \frac{1}{\sqrt{W_n}} \bm w' &\text{and}\qquad \| \bm v_1 \| &= \sqrt{W_n} \end{align*}

Next, the second column of $\bm X'$ is the vector $\begin{bmatrix} w'_1 x_1 & w'_2 x_2 & \ldots & w'_n x_n\end{bmatrix}^\top$, so

\begin{align*} \bm q_1^\top \bm x'_2 &= \frac{1}{\sqrt{W_n}} \sum_{i=1}^n (w'_i)^2 x_i \\ {} &= \frac{1}{\sqrt{W_n}} \sum_{i=1}^n w_i x_i = \sqrt{W_n} \mu_{x,n} \end{align*}

where we first recognize $(w'_i)^2 = w_i$ and then identify the near match to the definition of the weighted mean over values $x_i$.

Projecting out the component of $\bm q_1$ in $\bm x'_2$,

\begin{align*} \bm v_2 &= \bm x'_2 - \left(\sqrt{W_n} \mu_{x,n}\right) \bm q_1 \\ {} &= \bm x'_2 - \mu_{x,n} \bm w' \\ \bm v_2 &= \begin{bmatrix} w'_1 \left( x_1 - \mu_{x,n} \right) \\ w'_2 \left( x_2 - \mu_{x,n} \right) \\ \vdots \\ w'_n \left( x_n - \mu_{x,n} \right) \\ \end{bmatrix} \end{align*}

and the norm $\| \bm v_2 \|$ should now be recognizable as containing the sum of weighted squared deviations of $\bm x$ (specifically, $\| \bm v_2 \|^2 = \beta_n$), so we directly jump to using Welford’s algorithm to accumulate the quantity and declare

\begin{align*} \| \bm v_2 \| &= \sqrt{\beta_n} \end{align*}

Therefore,

\begin{align*} \bm q_2 &= \frac{1}{\sqrt{\beta_n}} \begin{bmatrix} w'_1 \left( x_1 - \mu_{x,n} \right) \\ w'_2 \left( x_2 - \mu_{x,n} \right) \\ \vdots \\ w'_n \left( x_n - \mu_{x,n} \right) \\ \end{bmatrix} \end{align*}

With every component of the QR decomposition in hand, we can now start solving the system of equations. For the product $\bm z \equiv \bm Q^\top \bm y'$

\begin{align*} z_1 &= \bm q_1^\top \bm y' & z_2 &= \bm q_2^\top \bm y' \\ {} &= \frac{1}{\sqrt{W_n}} \sum_{i=1}^n (w'_i)^2 y_i & {} &= \frac{1}{\sqrt{\beta_n}} \sum_{i=1}^n (w'_i)^2 (x_i - \mu_{x,n}) y_i \\ z_1 &= \sqrt{W_n} \mu_{y,n} & {} &= \frac{1}{\sqrt{\beta_n}} \left[ \left( \sum_{i=1}^n w_i x_i y_i \right) - \mu_{x,n} \left( \sum_{i=1}^n w_i y_i \right) \right] \\ {} & {} & {} &= \frac{W_n}{\sqrt{\beta_n}} \left[ \chi'_n - \mu_{x,n}\mu_{y,n} \right] \\ {} & {} & z_2 &= \frac{W_n\rho_n^2}{\sqrt{\beta_n}} \end{align*}

where again we’ve recognized definitions of the weighted mean over values $y_i$ and covariance between $x_i$ and $y_i$ from the derivation in the main body.

Finally, we have every component ready and can solve the system of equations $\bm R \bm a = \bm z$:

\begin{align*} \begin{bmatrix} \sqrt{W_n} & \sqrt{W_n} \mu_{x,n} \\ 0 & \sqrt{\beta_n} \end{bmatrix} \begin{bmatrix} b \\ m \end{bmatrix} &= \begin{bmatrix} \sqrt{W_n} \mu_{y,n} \\ W_n \rho_n^2 / \sqrt{\beta_n} \end{bmatrix} \end{align*}

Using back-substitution, we start with the equation for $m$:

\begin{align*} \sqrt{\beta_n} m &= \frac{W_n \rho_n^2}{\sqrt{\beta_n}} \\ \Aboxed{ m &= \frac{\rho_n^2}{\sigma_n^2} } \end{align*}

where we recognize that $\beta_n / W_n = \sigma_n^2$. Then relying on $m$ being known, the equation for $b$ is

\begin{align*} \sqrt{W_n} b + \sqrt{W_n} \mu_{x,n} m &= \sqrt{W_n} \mu_{y,n} \\ \Aboxed{ b &= \mu_{y,n} - m\mu_{x,n} } \end{align*}

These are precisely the expressions derived in the main body, without ever constructing an expression for $\tau_n$ and having to eliminate it.
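As a final check, a Python sketch (using NumPy, with all variable names my own) can compare the online accumulators against a batch weighted least-squares solve of the square-root-weighted system described above:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)
y = 2.5 * x - 1.0 + 0.1 * rng.normal(size=n)
w = rng.uniform(0.5, 2.0, size=n)

# Online accumulation of the boxed quantities from the main body
W = mu_x = mu_y = var_x = cov = 0.0
for xi, yi, wi in zip(x, y, w):
    W += wi
    frac = wi / W
    mu_x_prev = mu_x
    mu_x += frac * (xi - mu_x)
    mu_y += frac * (yi - mu_y)
    var_x = (1 - frac) * var_x + frac * (xi - mu_x_prev) * (xi - mu_x)
    cov = (1 - frac) * cov + frac * (xi - mu_x_prev) * (yi - mu_y)
m_online = cov / var_x
b_online = mu_y - m_online * mu_x

# Batch weighted least squares on the sqrt-weighted system X' a = y'
X = np.column_stack([np.ones(n), x])
sw = np.sqrt(w)
coeffs, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
b_batch, m_batch = coeffs

print(m_online, m_batch)  # the two methods should agree very closely
print(b_online, b_batch)
```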


  1. A way to more directly arrive at this particular choice of accumulation variables is to replace use of the normal equation with the back-solve method allowed by QR decomposition of the regressor matrix $\bm X$ (and proceeding to do the component-wise expansion of the solve). ↩︎

  2. This is effectively the derivation that shows that

    \begin{align*} \newcommand{\bbE}[1]{\mathbb{E}\left[#1\right]} \operatorname{var}(x) &= \bbE{(x - \mu_x)^2} \\ {} &= \bbE{x^2} - \left(\bbE{x}\right)^2 \end{align*}

    where $\bbE{\cdot}$ denotes the expectation value. ↩︎

  3. Erich Schubert and Michael Gertz. “Numerically Stable Parallel Computation of (Co-)Variance.” In Proceedings of the 30th International Conference on Scientific and Statistical Database Management, SSDBM ‘18. New York, NY, USA, 2018. Association for Computing Machinery. URL: https://dbs.ifi.uni-heidelberg.de/files/Team/eschubert/publications/SSDBM18-covariance-authorcopy.pdf, doi: 10.1145/3221269.3223036 ↩︎

  4. Justin Willmert. Constraining Inflationary B-modes with the Bɪᴄᴇᴘ/Keck Array Telescopes. PhD thesis, University of Minnesota, October 2019. URL: https://hdl.handle.net/11299/211821 ↩︎

  5. 2022 Oct 27 — The article has been updated to produce a self-consistent set of weight fractions, weight factors, and total weight. The “Weighting Schemes” section has gained a general derivation of how the weight factors and total weight can be derived from the weight fractions, and the initial condition of all weighting schemes has been clarified. These clarifications and corrections particularly impact the exponential case where there is no remaining ambiguity in how to define the weight factors (and an explicit expression for the total weight has been added). ↩︎

  6. 2022 Nov 01 — Additional notes have been added to describe how the exponential weighting fraction $\lambda$ relates to decay rates, parameterized in terms of both half-life and the exponential time constant. ↩︎

  7. 2022 Dec 12 — Derivations of the effective sample size $n_\mathrm{eff}$ were added, both in the general case and specializations for each of the uniform and exponential weighting schemes. ↩︎