Chapter 7    Estimation



 

7.1    Unbiased Estimators and Variability

Def:    Let $p$ be the value of some population parameter, and let the statistic $\hat{p}$ be an estimator for $p$ computed from a random sample. (Note that $\hat{p}$ is a random variable: its value depends on the particular sample selected.) Then $\hat{p}$ is said to be an unbiased estimator of $p$ if $E(\hat{p}) = p$, i.e., the expected value of the estimator is the value of the population parameter.

In addition to wanting the values given by an estimator to be centered around the true value of the parameter it is estimating, we would also like those values to have a narrow spread, i.e., on average they should not vary far to either side of the expected value. To measure this, we look at the standard deviation (variance) of the estimator; it tells us how far, on average, the values of the estimator vary from the expected value.
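A concrete sketch may help make the definition vivid. The short simulation below (Python with NumPy; the Uniform$(0, \theta)$ population and the two estimators are illustrative choices, not part of these notes) approximates the expected value and standard deviation of two estimators of the upper endpoint $\theta$: the sample maximum, which is biased, and a corrected version, which is unbiased.

    import numpy as np

    rng = np.random.default_rng(42)
    theta = 10.0           # true parameter: upper endpoint of a Uniform(0, theta) population
    n, reps = 20, 100_000  # sample size and number of simulated samples

    samples = rng.uniform(0.0, theta, size=(reps, n))
    est_max = samples.max(axis=1)          # biased: E[max] = n/(n+1) * theta < theta
    est_fixed = (n + 1) / n * est_max      # unbiased correction of the maximum

    for name, est in [("sample max", est_max), ("(n+1)/n * max", est_fixed)]:
        print(f"{name:14s}  mean ~ {est.mean():.4f}   sd ~ {est.std():.4f}")

The simulated mean of the sample maximum falls below $\theta = 10$ (it is biased low), while the corrected estimator's mean lands on $\theta$; the standard deviations measure how widely each estimator's values spread around its own expected value.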
 

The most important estimators are the sample mean $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$, which estimates the population mean $\mu$, and the sample variance $S^2 = \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{X})^2$, which estimates the population variance $\sigma^2$.

Consider the expected value and standard deviation (variance) of these two estimators.
 
 

Sample Mean

1.    Expected value (unbiasedness): $E(\bar{X}) = \mu$, so the sample mean is an unbiased estimator of the population mean.

2.    Variance (variability): $\mathrm{Var}(\bar{X}) = \dfrac{\sigma^2}{n}$, where $\sigma^2$ is the population variance and $n$ is the sample size.

Look at the variance of $\bar{X}$ to see how widely the values of $\bar{X}$ can be expected to vary from sample to sample.
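Both facts follow from a short computation; here is a sketch, assuming the random sample consists of independent observations $X_1, \dots, X_n$, each with mean $\mu$ and variance $\sigma^2$:

$$E(\bar{X}) = E\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n}\sum_{i=1}^{n} E(X_i) = \frac{n\mu}{n} = \mu,$$

$$\mathrm{Var}(\bar{X}) = \mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n^2}\sum_{i=1}^{n} \mathrm{Var}(X_i) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n},$$

where the variance step uses the independence of the $X_i$ (the variance of a sum of independent random variables is the sum of their variances).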

In fact: as $n$ approaches infinity, the variance of $\bar{X}$ approaches 0, and thus the probability that the value given by $\bar{X}$ differs from $\mu$ by more than any fixed amount goes to zero! This is known as the Law of Large Numbers.
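A quick simulation makes the shrinking variance visible. The sketch below (Python with NumPy; the exponential population with mean $\mu = 2$ is an arbitrary choice for illustration) draws many samples at each of several sample sizes and reports the mean and standard deviation of the resulting values of $\bar{X}$:

    import numpy as np

    rng = np.random.default_rng(0)
    mu = 2.0       # population mean of the illustrative Exponential population
    reps = 10_000  # number of samples drawn at each sample size

    for n in [5, 50, 500, 5000]:
        # each row is one sample of size n; the row means are values of X-bar
        xbar = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)
        print(f"n = {n:5d}   mean of X-bar ~ {xbar.mean():.4f}   sd of X-bar ~ {xbar.std():.4f}")

At every sample size the average of the simulated $\bar{X}$ values stays near $\mu = 2$ (unbiasedness), while their standard deviation shrinks like $\sigma/\sqrt{n}$, in line with the Law of Large Numbers.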
 
 
 
 
 

Sample Variance

1.    Expected value (unbiasedness): $E(S^2) = \sigma^2$, so the sample variance is an unbiased estimator of the population variance. (This is exactly why $S^2$ is defined with divisor $n-1$ rather than $n$: with divisor $n$, the expected value would be $\frac{n-1}{n}\sigma^2$, slightly below $\sigma^2$.)

2.    Variance (variability)

Alas!  The expression for the variance of $S^2$ depends on the type of distribution of the population! (For example, when the population is normal, $\mathrm{Var}(S^2) = \frac{2\sigma^4}{n-1}$.) However, under some general assumptions, it can be shown that the variance of $S^2$ decreases as the size of the sample increases, as was the case for the sample mean. Thus the larger the sample, the more accurate the value given by the sample variance.
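To see the unbiasedness claim for $S^2$ in action, the sketch below (Python with NumPy; the normal population and its parameters are arbitrary choices for the demonstration) compares the $n-1$ divisor used in $S^2$ against the naive divisor $n$:

    import numpy as np

    rng = np.random.default_rng(1)
    sigma2 = 9.0  # true population variance (normal population with sd = 3)
    n, reps = 10, 100_000

    samples = rng.normal(loc=0.0, scale=sigma2 ** 0.5, size=(reps, n))
    s2 = samples.var(axis=1, ddof=1)        # divisor n - 1: the sample variance S^2
    s2_naive = samples.var(axis=1, ddof=0)  # divisor n

    print(f"true variance sigma^2:        {sigma2}")
    print(f"mean of S^2 (divisor n-1):    {s2.mean():.4f}")        # lands near 9
    print(f"mean with divisor n instead:  {s2_naive.mean():.4f}")  # lands near (n-1)/n * 9 = 8.1

With divisor $n$ the average settles near $\frac{n-1}{n}\sigma^2 = 8.1$ rather than $9$; that systematic shortfall is exactly the bias the $n-1$ divisor removes.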



 