Standard Error vs Standard Deviation

Dear friends,

I’m having issues with Reading 9, Question 11, in Level I. The question stipulates the following: “Basing your estimate of future-period monthly return parameters on the sample mean and standard deviation for the period January 1994 to December 1996, construct a 90 percent confidence interval for the monthly return on a large-cap blend fund. Assume fund returns are normally distributed.” The question is accompanied by a table with the relevant data.

The solution uses the standard deviation instead of the standard error in calculating the confidence interval, despite this being a sample! I would really, really appreciate it if someone could elaborate on why they use the standard deviation instead of the standard error.

As far as I can tell, this is the opposite of the logic shown in later readings, such as Practice Problem 3b in Reading 10. That question also involves a normally distributed population, and you are required to calculate a confidence interval for the population mean based on a sample, yet there it’s calculated using the standard error. To me these questions are essentially the same, but they’re solved in different ways.

Please help, I’m willing to offer my first-born child as a token of appreciation, ha… Please help…

Whether it’s a sample or not isn’t the relevant point. The relevant point is whether you’re constructing a confidence interval for a single observation, or a confidence interval for the mean of a number of observations. You use the standard deviation as your measure of spread-out-edness when constructing a confidence interval for a single observation; you use the standard error as the measure when constructing a confidence interval for the mean of several observations.

As they stipulated “the monthly return”, not “the average monthly return”, they’re talking about a confidence interval for a single observation (i.e., each month’s return): use standard deviation.
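To make the distinction concrete, here’s a quick Python sketch. Note that the numbers below are hypothetical, not the actual values from the reading’s table:

```python
import math

# Hypothetical sample statistics (NOT the actual CFA table values):
# 36 monthly returns, January 1994 - December 1996.
n = 36
xbar = 1.2   # sample mean monthly return, in percent
s = 4.0      # sample standard deviation, in percent
z = 1.645    # z-value for a 90% confidence interval

# CI for a SINGLE month's return: use the standard deviation.
single_lo, single_hi = xbar - z * s, xbar + z * s  # (-5.38, 7.78)

# CI for the MEAN monthly return: use the standard error s / sqrt(n).
se = s / math.sqrt(n)
mean_lo, mean_hi = xbar - z * se, xbar + z * se

print(f"single observation: ({single_lo:.2f}%, {single_hi:.2f}%)")
print(f"mean of {n} obs:    ({mean_lo:.2f}%, {mean_hi:.2f}%)")
```

The interval for the mean is much narrower, since averaging 36 months washes out most of the month-to-month noise.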

I have enough offspring already, but thank you for the kind offer.

Since the std error is the std deviation divided by root(n), if you have a single observation, n = 1, and the std error equals the std deviation divided by root(1), which is just the std deviation. So in the case of a single observation, std deviation = std error.
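That reduction is easy to see directly; a tiny sketch with a made-up standard deviation:

```python
import math

def standard_error(s, n):
    """Standard error of the mean: sample std dev divided by sqrt(n)."""
    return s / math.sqrt(n)

s = 4.0                        # hypothetical sample standard deviation
print(standard_error(s, 1))    # n = 1: equals s itself -> 4.0
print(standard_error(s, 36))   # n = 36: shrinks by a factor of 6
```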

Unless I’m missing something (which is always a good possibility).

Fantastic! Thanks, guys, I think I’m finally getting my head around this concept. If the confidence interval had been for the mean of a number of observations, then I should have used the standard error as opposed to the standard deviation…

Your help has been greatly appreciated…

My pleasure.

Thanks for the explanations everyone, I had the exact same question.

Can you also explain why, for a 95% confidence interval for example, approximately 95% of all observations lie in the interval μ ± 2σ (population standard deviations),

while for a random variable that follows a normal distribution, the 95% confidence interval is x̄ ± 1.96s?

Thanks in advance for your help.



So the difference lies in the fact that we’re looking either at a normal distribution or at a standardized normal distribution (z-score).

Does that mean that by standardizing, we are losing accuracy?


I’m not sure how that fixes anything.

He’s included the sample mean and the sample standard deviation, so it doesn’t matter whether it’s a standard normal distribution, or a normal distribution with _any_ mean and _any_ standard deviation; his interval was correct as written.

1.96 is the proper number to use in both cases. When people write 2 instead of 1.96, they’re just being sloppy.

1.96 is not two standard deviations away from the mean in every sample. Isn’t this why we convert a normal sample to a standard normal?

edit: never mind, I overlooked the s.

He wrote 1.96 _s_; I presume that by _s_ he meant the sample standard deviation.

What do you think he meant?

Well, as written in Elan:

For a random variable X that follows the normal distribution,

the 95% confidence interval is x̄ ± 1.96s

the 99% confidence interval is x̄ ± 2.58s

The following probability statements can be made about normal distributions:

approximately 95% of all observations lie in the interval μ ± 2σ (population std dev)

approximately 99% of all observations lie in the interval μ ± 3σ (population std dev)

The last two are just being sloppy. Perhaps they think that adding “approximately” is enough to cover the sloppiness.

The correct numbers in both cases are 1.96 and 2.58 respectively.
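You can recover those exact critical values yourself from the standard normal distribution; for example, with Python’s standard library:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, standard deviation 1

# Two-tailed critical values: the exact numbers behind "1.96" and "2.58".
z95 = z.inv_cdf(0.975)   # 95% interval leaves 2.5% in each tail
z99 = z.inv_cdf(0.995)   # 99% interval leaves 0.5% in each tail

print(round(z95, 2))  # 1.96
print(round(z99, 2))  # 2.58

# And the "approximate" rule of thumb: what ±2 sigma really covers.
print(round(z.cdf(2) - z.cdf(-2), 4))  # 0.9545, slightly more than 95%
```

So μ ± 2σ actually covers about 95.45% of observations, which is why “2” is only an approximation to 1.96.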

Thanks for your explanation, I see a bit better what the difference is between the two.

However, with a standardized normal distribution, μ = 0 and σ = 1.

Then the bounds are x̄ ± 1.96s = ±1.96,

μ ± 2σ = ±2,

μ ± 3σ = ±3.

Is that right?

Thanks for your time

±1.96 standard deviations covers 95% of all observations. Check the z-table.

Now I understand, thanks very much for your help!

From the CFAI books:

“One, two, and three standard deviation intervals are illustrated in Figure 6. The intervals indicated are easy to remember but are only approximate for the stated probabilities. More-precise intervals are μ ± 1.96σ for 95 percent of the observations and μ ± 2.58σ for 99 percent of the observations.” (Institute 507)

CFA Institute. _CFA Institute Level I 2014 Volume 1: Ethical and Professional Standards and Quantitative Methods._ Wiley Global Finance, 2013. VitalBook file.