Confidence interval: standard error vs standard deviation

Hi guys!

When constructing a confidence interval, what determines whether we use the standard error or the standard deviation in the formula?

I know what both terms are, I just struggle to choose the correct one when answering questions. Any help would be appreciated!

Heathcliff

Depends on the information they give you.

You always use the standard error to calculate confidence intervals. The standard error equals s/√n, where s is the sample estimate of the population standard deviation and n is the sample size.
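A quick Python sketch of that formula, just to make it concrete (the sample values are made up for illustration):

```python
import math
from scipy import stats

# Hypothetical sample values, purely for illustration.
sample = [10.2, 9.8, 11.1, 10.5, 9.9, 10.7, 10.0, 10.4]
n = len(sample)
mean = sum(sample) / n

# s: sample estimate of the population standard deviation (divide by n - 1)
s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

# Standard error of the sample mean
se = s / math.sqrt(n)

# 95% confidence interval for the population mean, using a t critical value
# since s (not the population sigma) is being used.
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (mean - t_crit * se, mean + t_crit * se)
print(f"mean = {mean:.3f}, SE = {se:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```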

Standard error it is.

Always use the standard error…

Thanks for your responses guys!

Not to confuse anyone, but be careful on the exam: if they supply both a population standard deviation and a sample standard deviation, make sure to use the population standard deviation for calculating the standard error. In practice you'll rarely know the population standard deviation, but they may try to trip you up on the exam.
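Here's a minimal sketch of that distinction, with hypothetical numbers; one common convention uses a z critical value when the population sigma is given and a t critical value when only the sample s is available:

```python
import math
from scipy import stats

n = 36
sample_mean = 100.0
sigma = 15.0   # population standard deviation (if the question gives it, use this one)
s = 14.2       # sample standard deviation (hypothetical value)

# Population sigma known: standard error uses sigma, critical value from z
se_known = sigma / math.sqrt(n)
z = stats.norm.ppf(0.975)
ci_known = (sample_mean - z * se_known, sample_mean + z * se_known)

# Population sigma unknown: standard error uses s, critical value from t (df = n - 1)
se_unknown = s / math.sqrt(n)
t = stats.t.ppf(0.975, df=n - 1)
ci_unknown = (sample_mean - t * se_unknown, sample_mean + t * se_unknown)

print("95% CI with sigma known:  ", ci_known)
print("95% CI with sigma unknown:", ci_unknown)
```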

Hello,

On page 257 of Schweser Book 1, the formula uses the standard deviation to calculate confidence intervals for normal distributions. I'm not sure if perhaps the question above relates to distributions that are not normal (I haven't gotten to the part of the books that deals with non-normal distributions).

I believe it is different. When you use the standard error to calculate a confidence interval, you are dealing with the sampling distribution of the sample mean, so the "observations" in that distribution are sample means. When you use the standard deviation, you are dealing with the distribution of the individual observations themselves, whatever those observations might be. Anyone correct me if I am wrong.
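A small sketch of that distinction, with hypothetical numbers: the standard error describes how much the sample mean varies, while the standard deviation describes how much individual observations vary.

```python
import math
from scipy import stats

# Hypothetical numbers for illustration only.
mu_hat = 8.0      # sample mean return (%)
s = 20.0          # sample standard deviation of individual returns (%)
n = 100           # sample size
z = stats.norm.ppf(0.975)

# Interval for where a single observation is likely to fall
# (uses the standard deviation of the observations themselves)
obs_interval = (mu_hat - z * s, mu_hat + z * s)

# Confidence interval for the population mean
# (uses the standard error = s / sqrt(n), the std dev of the sample mean)
se = s / math.sqrt(n)
mean_interval = (mu_hat - z * se, mu_hat + z * se)

print("approx. 95% interval for one observation:", obs_interval)
print("95% confidence interval for the mean:    ", mean_interval)
```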

^ is correct. You assume a normal distribution for confidence intervals.