Confidence Interval

Schweser Study Session 3, page 293, #9: “A random sample of 100 computer store customers spent an average of $75 at the store. Assuming the distribution is normal and the population standard deviation is $20, the 95% confidence interval for the population mean is closest to:”

I understand how to calculate the answer, but I do not understand why you divide the standard deviation by 100^0.5. In previous questions that seemed identical, except that no sample size was provided, the confidence interval was obtained by simply using the given standard deviation (in this case $20), yet in this scenario you have to divide the standard deviation by 100^0.5. Any assistance on understanding this topic would be much appreciated.

I don’t have the Schweser notes, but a (1 − α)% confidence interval is: point estimate ± reliability factor × standard error. The standard error has to account for the fact that the point estimate (in this case $75) is based on a sample of size n = 100. Thus standard error = population standard deviation / n^0.5, and that is why you have to divide by 100^0.5. Not knowing what the previous question was, I can’t help much with that one. Hope it helps.
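If it helps to see the arithmetic laid out, here is a minimal sketch in Python (my own illustration, not from the Schweser notes; the variable names are just placeholders):

```python
# Minimal sketch: 95% CI for the population mean when the population
# standard deviation is known, using a z reliability factor.
from math import sqrt

sample_mean = 75.0      # point estimate from the sample
pop_std = 20.0          # population standard deviation (given)
n = 100                 # sample size
z = 1.96                # reliability factor for a 95% confidence level

std_error = pop_std / sqrt(n)            # 20 / 10 = 2
lower = sample_mean - z * std_error      # 75 - 3.92 = 71.08
upper = sample_mean + z * std_error      # 75 + 3.92 = 78.92

print(f"95% CI: ({lower:.2f}, {upper:.2f})")
```

With z = 1.96 the interval works out to roughly ($71.08, $78.92).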

el bumpo… JDV, can you please explain this?

Standard error of the sample mean = population standard deviation / sqrt(n), where n is the sample size. You use it when estimating a confidence interval from a sample and the population standard deviation is provided. sqrt(n) is also written as n to the power of 0.5 (n^0.5).

Point taken, I will try to use more widely accepted notation in future posts. Cheers

Looks ok to me…

I think the OP was asking this: conceptually, why isn’t n needed to calculate z in the second kind of problem? Compare these two examples.

Example 1: “A random sample of 100 computer store customers spent an average of $75 at the store. Assuming the distribution is normal and the population standard deviation is $20, the 95% confidence interval for the population mean is closest to:”

Reliability factor (from the table for 100 d.f. at 0.025) ≈ 1.984
Standard error of the sample mean = 20 / sqrt(100) = 2
C.I. = 75 ± 1.984 × 2 ≈ (71.03, 78.97)

Example 2: “A study of hedge fund investors found that their annual household incomes are normally distributed with a mean of $175k and a standard deviation of $25k. What percent of hedge fund investors have incomes less than $100k?” (n is not given)

z = (x − mean) / std = (100 − 175) / 25 = −3, which corresponds to a probability of about 0.0013
Working backwards, a 95% confidence interval would be mean ± 1.96 × std = (126, 224)

Why do you divide by the square root of n if n is given, but not if n isn’t given? How does this work?
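To make the contrast concrete, here is a rough Python sketch of both calculations (my own illustration, using scipy; I use z = 1.96 rather than the 1.984 quoted above since the population standard deviation is known, so the numbers differ slightly):

```python
# Rough sketch of both calculations (illustrative only; names are mine).
from math import sqrt
from scipy.stats import norm

# Example 1: estimating a population mean from a sample -> divide by sqrt(n).
sample_mean, pop_std, n = 75.0, 20.0, 100
z = norm.ppf(0.975)                        # ~1.96 for a 95% confidence level
se = pop_std / sqrt(n)                     # standard error of the sample mean = 2
print(sample_mean - z * se, sample_mean + z * se)   # ~ (71.08, 78.92)

# Example 2: probability from a known normal distribution -> no sqrt(n),
# because we describe a single observation, not an estimate of the mean.
mean, std = 175.0, 25.0
z_score = (100.0 - mean) / std             # -3
print(norm.cdf(z_score))                   # ~0.00135, i.e. about 0.13%
print(mean - z * std, mean + z * std)      # 95% of incomes: ~ (126, 224)
```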

In the first example, you are estimating a mean using a C.I. You sample from the population of computer store customers, compute an average, and say that the sample average is approximately the population mean. That makes sense intuitively, I hope. But it isn’t exact, and the question is: how close is it? The answer is that the width of the C.I. decreases as the sample size gets bigger, and it decreases at the rate 1/sqrt(n). Some math could prove that, but you don’t need to know it. If you are estimating a population mean with a sample mean, use the sqrt(n) thingie.

In the second example, you just have a distribution and are not trying to estimate a mean. You are just working out a probability from a normal distribution. There is no estimation problem being done at all.
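If a picture of that 1/sqrt(n) behaviour helps, here is a quick simulation sketch (my own, not from the curriculum): the spread of the sample mean shrinks as the sample size grows, in line with pop_std / sqrt(n).

```python
# Simulation sketch: the standard deviation of the sample mean shrinks
# roughly like 1/sqrt(n) as the sample size n grows.
import numpy as np

rng = np.random.default_rng(42)
pop_mean, pop_std = 75.0, 20.0

for n in (25, 100, 400):
    # Draw 10,000 samples of size n and compute each sample's mean.
    sample_means = rng.normal(pop_mean, pop_std, size=(10_000, n)).mean(axis=1)
    print(f"n={n:4d}: std of sample means ~ {sample_means.std():.2f} "
          f"(theory: {pop_std / np.sqrt(n):.2f})")
```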

Thank you for the clarification. I now understand. Appreciate all the help.