Expectation, Covariance, and Optimal Estimators
Expected Value and Variance
A random variable $X$ can take a range of values. The distribution of outcomes depends on the underlying pdf $p(x)$. It has a mean value, also known as the expected value. The mean and expected value are denoted by $\mu$ and $E[X]$, respectively.
We can calculate the expected value by taking a weighted sum of each outcome of the random variable and its corresponding probability:

$$E[X] = \sum_x x \, p(x)$$
Now we also want a measure of the spread -- how much the outcomes typically deviate from the expected value:

$$\sigma^2 = \text{Var}(X) = E\left[(X - \mu)^2\right]$$

$\sigma$ is the standard deviation and $\sigma^2$ is the variance.
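As a sanity check, here's a minimal sketch (NumPy assumed; the fair-die distribution is an illustrative choice of mine, not from the text) that computes the weighted sums directly:

```python
# Expected value and variance of a discrete RV via the weighted-sum definitions.
import numpy as np

outcomes = np.arange(1, 7)        # possible values of X (a fair six-sided die)
p = np.full(6, 1 / 6)             # p(x) for each outcome

mean = np.sum(outcomes * p)                    # E[X] = sum_x x * p(x)
variance = np.sum((outcomes - mean) ** 2 * p)  # sigma^2 = E[(X - mu)^2]
std = np.sqrt(variance)                        # sigma

print(mean, variance, std)        # 3.5, ~2.917, ~1.708
```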
Covariance for Multivariate Random Variables
Now what if we have a multivariate random variable? Say, for instance, the position of a robot on a 2D grid: $X = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$.
The expected value for the vector can be found simply by taking the expected value element-wise:

$$E[X] = \begin{bmatrix} E[X_1] \\ E[X_2] \end{bmatrix}$$
Variance is actually trickier. The features are related in some way (that's why they're in a single vector), so taking the variance element-wise would leave valuable information out. We can categorize the relation between the two features into two groups:
- When $X_1$ is big, $X_2$ is big. When $X_1$ is small, $X_2$ is small.
- When $X_1$ is big, $X_2$ is small. When $X_1$ is small, $X_2$ is big.
But what do we mean by big or small? Well, the difference between the random variable and its expected value is a good measure! It's small if $X - E[X] < 0$ and big if $X - E[X] > 0$.
Note that it's not possible to have this configuration:
- When $X_1$ is big, $X_2$ is big.
- When $X_1$ is small, $X_2$ is big.
If this were the case, it'd mean that $X_2$ is always big! This isn't possible: if $X_2$ were always above its mean, the mean itself would have to be bigger, and then some values would end up small after all.
The relation between $X_1$ and $X_2$ is really what defines the covariance. It tells us a lot about the "shape" of the distribution. But how do we calculate it? Well, we only really care about whether or not the variables change together. It doesn't matter if they're both small or both big -- that's one group. We want a positive covariance if they're directly related and a negative one if they're inversely related. Consider the following formula, built from the expected value, and it should feel quite intuitive:

$$\text{Cov}(X_1, X_2) = E\left[(X_1 - E[X_1])(X_2 - E[X_2])\right]$$
If both values are small (negative times negative) or both are big (positive times positive), the product inside the expected value will be positive. If one is big and the other is small, it will be negative. The expected value then gives us a notion of the average result.
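To see the sign behaviour concretely, here's a hedged sketch (NumPy assumed; the variables x1, x2_direct, x2_inverse are my own illustrative names):

```python
# Sample covariance is positive when two variables move together,
# negative when one is big while the other is small.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=10_000)
x2_direct = 2.0 * x1 + rng.normal(scale=0.5, size=10_000)    # moves with x1
x2_inverse = -2.0 * x1 + rng.normal(scale=0.5, size=10_000)  # moves against x1

def cov(a, b):
    # Cov(A, B) = E[(A - E[A]) (B - E[B])], estimated from samples
    return np.mean((a - a.mean()) * (b - b.mean()))

print(cov(x1, x2_direct))    # positive: big/small together
print(cov(x1, x2_inverse))   # negative: directly opposed
```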
A nice property of covariance is that the covariance of a random variable with itself is simply the variance: $\text{Cov}(X, X) = E\left[(X - E[X])^2\right] = \text{Var}(X)$.
So for our multidimensional random variable, the covariance matrix would look like this:

$$\Sigma = \begin{bmatrix} \text{Var}(X_1) & \text{Cov}(X_1, X_2) \\ \text{Cov}(X_2, X_1) & \text{Var}(X_2) \end{bmatrix}$$
And for a random vector with $D$ dimensions:

$$\Sigma = \begin{bmatrix} \text{Var}(X_1) & \text{Cov}(X_1, X_2) & \cdots & \text{Cov}(X_1, X_D) \\ \text{Cov}(X_2, X_1) & \text{Var}(X_2) & \cdots & \text{Cov}(X_2, X_D) \\ \vdots & \vdots & \ddots & \vdots \\ \text{Cov}(X_D, X_1) & \text{Cov}(X_D, X_2) & \cdots & \text{Var}(X_D) \end{bmatrix}$$
This can actually be calculated using a matrix product as follows:

$$\Sigma = E\left[(X - E[X])(X - E[X])^T\right]$$
Feel free to verify this product yourself.
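If you'd rather let a computer do it, here's a small NumPy check (my own sketch; the 2D Gaussian parameters are arbitrary) that the outer-product form matches the covariance matrix entry by entry:

```python
# Verify Sigma = E[(X - E[X])(X - E[X])^T] against NumPy's covariance estimate.
import numpy as np

rng = np.random.default_rng(1)
X = rng.multivariate_normal(mean=[1.0, -2.0],
                            cov=[[2.0, 0.8], [0.8, 1.0]],
                            size=50_000)          # rows are samples of a 2D RV

mu = X.mean(axis=0)                               # element-wise expected value
centered = X - mu
Sigma = centered.T @ centered / len(X)            # E[(X - mu)(X - mu)^T]

print(Sigma)
print(np.cov(X.T, bias=True))                     # should agree closely
```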
Properties of Expectation and Covariance
Linearity
We want to show that expectation is linear and how the covariance of a sum expands:

$$E[X + Y] = E[X] + E[Y]$$

$$\text{Cov}(X + Y) = \text{Cov}(X) + \text{Cov}(Y) + \text{Cov}(X, Y) + \text{Cov}(Y, X)$$

First, note that, using marginalization, $\sum_y p(x, y) = p(x)$. With that, we can proceed with the derivation:

$$E[X + Y] = \sum_x \sum_y (x + y)\, p(x, y) = \sum_x x \sum_y p(x, y) + \sum_y y \sum_x p(x, y) = \sum_x x\, p(x) + \sum_y y\, p(y) = E[X] + E[Y]$$

Expanding the definition of covariance for $X + Y$ and using the linearity we just showed gives the second identity. The cross terms $\text{Cov}(X, Y)$ and $\text{Cov}(Y, X)$ appear because $X$ and $Y$ are not assumed to be independent.
Note that, when I say $\text{Cov}(X, Y)$, this is the covariance of each feature of one random variable with respect to the features of the other. It's the same case as I explored earlier with $X_1$ and $X_2$.
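A quick Monte Carlo check of both identities (NumPy assumed; the particular distributions for X and Y are arbitrary choices of mine):

```python
# Check E[X + Y] = E[X] + E[Y] and
# Cov(X + Y) = Cov(X) + Cov(Y) + Cov(X, Y) + Cov(Y, X) on samples.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
X = rng.multivariate_normal([0.0, 1.0], [[1.0, 0.3], [0.3, 2.0]], size=n)
Y = 0.5 * X + rng.normal(size=(n, 2))      # deliberately correlated with X

def cov(a, b):
    # Cross-covariance matrix E[(A - E[A])(B - E[B])^T], estimated from samples
    return (a - a.mean(axis=0)).T @ (b - b.mean(axis=0)) / len(a)

print(np.allclose((X + Y).mean(axis=0), X.mean(axis=0) + Y.mean(axis=0)))  # True
lhs = cov(X + Y, X + Y)
rhs = cov(X, X) + cov(Y, Y) + cov(X, Y) + cov(Y, X)
print(np.allclose(lhs, rhs))                                               # True
```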
Estimation
Imagine $X$ is a hidden state which we can't observe directly. Since it's unobservable, we want an estimator $\hat X$. The estimator may be noisy (ideally not), but it definitely shouldn't have bias. An unbiased estimator is one which, in the long run, averages out to the value of the hidden state. In other words, despite the noise, the estimator is centered around the hidden state's value.
To put it mathematically, we want $E[\hat X] = E[X]$.
The error of our estimator is $\tilde X = X - \hat X$. The expected value of the error should be zero, because the bias is 0 and, on average, the noise is displaced evenly on either side of the mean:

$$E[\tilde X] = E[X - \hat X] = E[X] - E[\hat X] = 0$$
The covariance of the estimator is $\text{Cov}(\hat X) = E\left[(\hat X - E[\hat X])(\hat X - E[\hat X])^T\right]$, which, treating the hidden state as fixed, is also the covariance of the error, $E[\tilde X \tilde X^T]$.
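For concreteness, here's a small simulation of an unbiased estimator (the hidden state and noise covariance below are made-up values, not anything from the text):

```python
# An unbiased estimator: the true hidden state plus zero-mean noise.
import numpy as np

rng = np.random.default_rng(3)
x_true = np.array([2.0, -1.0])                  # hidden state (unknown in practice)
noise_cov = np.array([[0.5, 0.1], [0.1, 0.3]])

x_hat = x_true + rng.multivariate_normal([0.0, 0.0], noise_cov, size=100_000)
error = x_true - x_hat

print(error.mean(axis=0))     # ~[0, 0]: the error has zero mean (unbiased)
print(np.cov(error.T))        # ~noise_cov: the estimator's covariance
```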
Combining Estimators
Oftentimes we have two separate estimators of a single hidden state: $\hat X_1$ and $\hat X_2$. How can we combine them to achieve an estimator which is better than either on its own? Well, if we're looking for an optimal combination, we need some measure of optimality to optimize for. The trace of a matrix is the sum of its diagonal entries. In a covariance matrix, the diagonal entries are the variances of each feature of the random variable. Since we care more about the spread of each feature, and not so much about the covariance between features, this seems like a good measure of the overall noise level.
With that measure of optimality, we can formalize our task:

$$\min \; \text{tr}\left(\text{Cov}(\hat X)\right) \quad \text{subject to} \quad E[\hat X] = E[X]$$
So, we ensure the estimator is unbiased and then minimize the trace of the covariance.
Consider the example of a linear combination of two 1D Gaussian estimators:

$$\hat X = k_1 \hat X_1 + k_2 \hat X_2, \qquad \hat X_1 \sim \mathcal{N}(X, \sigma_1^2), \qquad \hat X_2 \sim \mathcal{N}(X, \sigma_2^2)$$
We want it to be unbiased:

$$E[\hat X] = k_1 E[\hat X_1] + k_2 E[\hat X_2] = (k_1 + k_2) E[X]$$

For this to equal $E[X]$, we need $k_1 + k_2 = 1$, i.e. $k_2 = 1 - k_1$.
We assume the two estimators are independent, so the covariance is $\text{Cov}(\hat X_1, \hat X_2) = 0$. For independent RVs, the following identity applies:

$$\text{Var}(k_1 \hat X_1 + k_2 \hat X_2) = k_1^2 \sigma_1^2 + k_2^2 \sigma_2^2$$
Now we want to minimize the variance w.r.t. $k_1$:

$$\sigma^2 = k_1^2 \sigma_1^2 + (1 - k_1)^2 \sigma_2^2, \qquad \frac{d\sigma^2}{dk_1} = 2 k_1 \sigma_1^2 - 2(1 - k_1)\sigma_2^2 = 0$$
Solving for $k_1$:

$$k_1 = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}, \qquad k_2 = 1 - k_1 = \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}$$
Replacing $k_1$ and $k_2$ with the critical values we calculated:

$$\hat X = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\,\hat X_1 + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\,\hat X_2$$
This should intuitively make sense. If $\hat X_1$ has a lot of noise ($\sigma_1^2$ is big), $\hat X_2$ will contribute more to the combined estimator.
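Here's a sketch of that 1D fusion (NumPy assumed; the true state and the two variances are arbitrary illustrative numbers), checking that the fused variance drops below either input's:

```python
# Fuse two independent 1D estimates with the optimal weights derived above.
import numpy as np

rng = np.random.default_rng(4)
x_true, var1, var2 = 5.0, 4.0, 1.0
n = 200_000

x_hat1 = x_true + rng.normal(scale=np.sqrt(var1), size=n)   # noisier estimator
x_hat2 = x_true + rng.normal(scale=np.sqrt(var2), size=n)

k1 = var2 / (var1 + var2)          # the noisier estimator gets the smaller weight
k2 = var1 / (var1 + var2)
x_fused = k1 * x_hat1 + k2 * x_hat2

print(x_hat1.var(), x_hat2.var(), x_fused.var())  # ~4.0, ~1.0, ~0.8
print(x_fused.mean())                             # ~5.0: still unbiased
```

The closed-form fused variance is $\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}$, which is always smaller than either individual variance.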
Combining Estimators for Multidimensional RVs
Now we have $\hat X_1$ with covariance $\Sigma_1$ and $\hat X_2$ with covariance $\Sigma_2$:

$$\hat X = K_1 \hat X_1 + K_2 \hat X_2$$
First, we must ensure the estimator is unbiased:

$$E[\hat X] = K_1 E[\hat X_1] + K_2 E[\hat X_2] = (K_1 + K_2) E[X]$$

For this to equal $E[X]$ for any state, we need $K_1 + K_2 = I$, i.e. $K_1 = I - K_2$.
This condition ensures the new estimator is unbiased. Onto the minimization of the covariance (again assuming the two estimators are independent):

$$\text{Cov}(\hat X) = K_1 \Sigma_1 K_1^T + K_2 \Sigma_2 K_2^T = (I - K_2)\,\Sigma_1\,(I - K_2)^T + K_2 \Sigma_2 K_2^T$$
Now taking the partial derivative of the trace w.r.t. $K_2$ (using $\frac{\partial}{\partial A} \text{tr}(A \Sigma A^T) = 2 A \Sigma$ for symmetric $\Sigma$) and setting it to zero:

$$\frac{\partial}{\partial K_2} \text{tr}\left(\text{Cov}(\hat X)\right) = -2(I - K_2)\,\Sigma_1 + 2\,K_2\,\Sigma_2 = 0$$
Solving for $K_2$:

$$K_2(\Sigma_1 + \Sigma_2) = \Sigma_1 \implies K_2 = \Sigma_1(\Sigma_1 + \Sigma_2)^{-1}, \qquad K_1 = I - K_2 = \Sigma_2(\Sigma_1 + \Sigma_2)^{-1}$$
Rewriting our original expression for the estimator:

$$\hat X = \Sigma_2(\Sigma_1 + \Sigma_2)^{-1}\,\hat X_1 + \Sigma_1(\Sigma_1 + \Sigma_2)^{-1}\,\hat X_2$$
This looks really similar to the 1D case and still aligns with intuition. If $\hat X_1$ has high covariance (a large $\Sigma_1$), then $K_2$ will be large and $\hat X_2$ will dominate the new estimator.
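And a sketch of the matrix version (the covariances Sigma1 and Sigma2 below are made up), checking that the optimal $K_1$ and $K_2$ shrink the trace of the fused covariance below either input's:

```python
# Fuse two independent multidimensional estimates with
# K1 = Sigma2 (Sigma1 + Sigma2)^-1 and K2 = Sigma1 (Sigma1 + Sigma2)^-1.
import numpy as np

rng = np.random.default_rng(5)
x_true = np.array([1.0, 2.0])
Sigma1 = np.array([[3.0, 0.5], [0.5, 1.0]])
Sigma2 = np.array([[1.0, -0.2], [-0.2, 2.0]])
n = 200_000

x_hat1 = x_true + rng.multivariate_normal([0.0, 0.0], Sigma1, size=n)
x_hat2 = x_true + rng.multivariate_normal([0.0, 0.0], Sigma2, size=n)

S_inv = np.linalg.inv(Sigma1 + Sigma2)
K1, K2 = Sigma2 @ S_inv, Sigma1 @ S_inv           # note K1 + K2 = I
x_fused = x_hat1 @ K1.T + x_hat2 @ K2.T           # row-wise K1 x1 + K2 x2

print(np.trace(Sigma1), np.trace(Sigma2))         # 4.0, 3.0
print(np.trace(np.cov(x_fused.T)))                # smaller than both
print(x_fused.mean(axis=0))                       # ~[1, 2]: still unbiased
```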
This choice of $K_1$ and $K_2$ is actually optimal among all unbiased linear combinations, since the function we're optimizing is convex. I don't know that much about this, so I won't go into great detail. My understanding is that the convexity basically means optimizing the function will definitely give us a global minimum -- there aren't several local minima we could get stuck in.
The method of optimizing with respect to the trace of the covariance is foundational to the derivation for the Kalman Filter, so understanding this prerequisite is critical.