Investopedia defines standard error as the difference between:
A) the true mean of the population, and
B) the mean of your sample.
The formula in the CFA book says: standard error of the sample mean = sample standard deviation / (square root of sample size).
So as N (the sample size) goes up, the standard error goes down.
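To convince myself of that last point, here is a quick sketch (my own numbers, not from the CFA book) showing how the standard error shrinks as N grows while the sample standard deviation stays fixed:

```python
# Sketch: standard error = sample SD / sqrt(N), for a fixed sample SD.
import math

sample_sd = 3.0  # assumed sample standard deviation, in inches

for n in [25, 100, 250, 1000, 10000]:
    se = sample_sd / math.sqrt(n)
    print(f"N = {n:6d} -> standard error = {se:.4f} inches")
```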
But in practice, is the standard error literally supposed to equal the population mean minus the sample mean? How can a calculation involving standard deviations be that certain about averages?
Let’s use some real numbers here so I can make sense of this.
Our population is the city of Dallas, with 1,200,000 people. We want to know the average height and the standard deviation of height for the entire Dallas population. To estimate this, we take a sample of 250 people and find:
A sample mean of 68 inches
A sample standard deviation of 3 inches
N=250 people
So standard error = 3/(square root of 250) = 3/15.811 = 0.1897 inches?
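The same calculation in Python, using the numbers above (just a sketch of the arithmetic, nothing more):

```python
# Standard error of the sample mean for the hypothetical Dallas sample.
import math

sample_mean = 68.0   # inches
sample_sd = 3.0      # inches
n = 250              # people sampled

standard_error = sample_sd / math.sqrt(n)
print(f"standard error = {standard_error:.4f} inches")  # ~0.1897
```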
Does that imply that the difference between my sample mean and the true population mean is 0.1897 inches? How would I know whether the true population mean is 68.1897 inches or 67.8103 inches? That sounds like a ridiculous conclusion, because we'll never know the population mean and standard deviation without measuring all 1,200,000 residents' heights.
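To poke at this, I also sketched a quick simulation. The population here is completely made up (true mean 68, true SD 3), since I obviously don't have the real Dallas data; the idea is just to see how sample means of 250 people scatter from one sample to the next:

```python
# Simulation sketch (hypothetical population, not real Dallas data):
# draw many samples of 250 from a population with mean 68 and SD 3,
# then look at how spread out the resulting sample means are.
import random
import statistics

random.seed(0)
true_mean, true_sd, n, trials = 68.0, 3.0, 250, 2000

sample_means = [
    statistics.mean(random.gauss(true_mean, true_sd) for _ in range(n))
    for _ in range(trials)
]

# The spread of these sample means comes out near 3 / sqrt(250) ~ 0.19 inches,
# even though any single sample mean can land closer to or farther from 68.
print(f"spread of sample means = {statistics.stdev(sample_means):.4f} inches")
```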