I’m having issues with Reading 9, Question 11, in Level I. The question reads: “Basing your estimate of future-period monthly return parameters on the sample mean and standard deviation for the period January 1994 to December 1996, construct a 90 percent confidence interval for the monthly return on a large-cap blend fund. Assume fund returns are normally distributed.” There is a table preceding this with the relevant data. The solution uses the standard deviation instead of the standard error when calculating the confidence interval, despite this being a sample! I would really, really appreciate it if someone could elaborate on why they use the standard deviation instead of the standard error. As far as I can tell, this is the opposite of the logic shown in later readings, such as Practice Problem 3b in Reading 10. That question also involves a normally distributed population, and you are required to calculate a confidence interval given the population mean for a sample, yet there it is calculated using the standard error. To me these questions are essentially the same, but they are solved in different ways.
Please help, I’m willing to offer my first-born child as a token of appreciation, ha… Please help…
Whether it’s a sample or not isn’t the relevant point. The relevant point is whether you’re constructing a confidence interval for a single observation, or a confidence interval for the mean of a number of observations. You use the standard deviation as your measure of spread-out-edness when constructing a confidence interval for a single observation; you use the standard error as the measure when constructing a confidence interval for the mean of several observations.
As they stipulated “the monthly return”, not “the average monthly return”, they’re talking about a confidence interval for a single observation (i.e., each month’s return): use standard deviation.
I have enough offspring already, but thank you for the kind offer.
Since the std error is the std deviation divided by root(n), if you have a single observation, n=1, and the std error equals the std deviation divided by root(1). So in the case of a single observation, std deviation = std error.
Unless I’m missing something (which is always a good possibility).
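The distinction above can be seen in a quick sketch. The numbers below are made up for illustration (they are not the actual table values from the reading); the point is only that the single-observation interval uses the standard deviation, the interval for the mean uses the standard error, and with n = 1 the two coincide:

```python
import math

# Illustrative numbers only -- NOT the actual figures from the reading's table.
mean = 1.5    # sample mean monthly return, in percent
stdev = 4.0   # sample standard deviation of monthly returns, in percent
n = 36        # Jan 1994 - Dec 1996 = 36 monthly observations
z = 1.645     # z-value for a 90% confidence interval

# CI for a SINGLE month's return: use the standard deviation.
single_lo = mean - z * stdev
single_hi = mean + z * stdev

# CI for the MEAN monthly return: use the standard error = stdev / sqrt(n).
std_error = stdev / math.sqrt(n)
mean_lo = mean - z * std_error
mean_hi = mean + z * std_error

print(f"Single observation: ({single_lo:.2f}%, {single_hi:.2f}%)")
print(f"Mean of {n} obs:    ({mean_lo:.2f}%, {mean_hi:.2f}%)")

# With n = 1, the standard error equals the standard deviation.
assert stdev / math.sqrt(1) == stdev
```

Note how much wider the single-observation interval is: one month’s return is far less predictable than the average over 36 months.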
Fantastic! Thanks guys, I think I’m finally getting my head around this concept. If the confidence interval had been for the average of a number of observations, then I should have used the standard error as opposed to the standard deviation…
He’s included the sample mean and the sample standard deviation, so it doesn’t matter whether it’s a standard normal distribution, or a normal distribution with _ any _ mean and _ any _ standard deviation; his interval was correct as written.
"One, two, and three standard deviation intervals are illustrated in Figure 6. The intervals indicated are easy to remember but are only approximate for the stated probabilities. More-precise intervals are μ ± 1.96σ for 95 percent of the observations and μ ± 2.58σ for 99 percent of the observations." (Institute 507) Institute, CFA. CFA Institute Level I 2014 Volume 1 Ethical and Professional Standards and Quantitative Methods. Wiley Global Finance, 2013-07-12. VitalBook file.
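Those multipliers in the quote can be checked directly from the standard normal distribution; a minimal sketch using Python’s standard library (the 90% value, 1.645, is the one used in the question above):

```python
from statistics import NormalDist

# Two-tailed z-values: for a 95% interval, 2.5% sits in each tail,
# so we invert the CDF at 0.975, and similarly for the others.
z90 = NormalDist().inv_cdf(0.95)   # ~1.645, for 90% of observations
z95 = NormalDist().inv_cdf(0.975)  # ~1.96,  for 95% of observations
z99 = NormalDist().inv_cdf(0.995)  # ~2.58,  for 99% of observations

print(round(z90, 3), round(z95, 2), round(z99, 2))
```

This confirms that the "2 standard deviations for 95%" rule of thumb is only approximate; 1.96 is the more precise figure.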