When constructing confidence intervals, how do you know whether to use the standard deviation or the standard error? For example: Construct a 90 percent confidence interval for the starting salaries of 100 recently hired employees with an average starting salary of $50,000 and a standard deviation of $3,000, assuming the population has a normal distribution. Is it 50000 +/- 1.65(3000) or 50000 +/- 1.65(300)? Which one is right?
Do we use 50000 +/- 1.65(300) here? Since we are estimating the CI around the sample statistic, we need to account for the sampling error, right?
This is another Schweser wacky statistics question. A C.I. is an interval estimate of a parameter so you would always use the standard error, not the standard deviation. Unfortunately, in this question they aren’t really asking for a C.I. even though they call it that. A C.I. should say something like “Construct a 90% C.I. for the mean salary of new hires given that a sample had [blah, blah]”
thanks for the explanation joey. learned something new. CI = M ± (z × SE)
if you’re looking for the interval around a MEAN, use the standard error; if it’s the interval around a POINT (a single observation), use the standard deviation. so in this case, since they’re looking for the interval around the MEAN of 50,000, you use standard error…
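To see how much the two choices matter numerically, here’s a quick sketch of both intervals using the figures from the question (1.645 is the usual two-sided 90% z-value; the post rounds it to 1.65):

```python
import math

# Figures from the question
n = 100          # sample size
mean = 50_000    # average starting salary
s = 3_000        # standard deviation
z = 1.645        # two-sided 90% critical value for the standard normal

# Interval around the MEAN: use the standard error s / sqrt(n)
se = s / math.sqrt(n)                       # 3000 / 10 = 300
ci_mean = (mean - z * se, mean + z * se)    # ≈ (49506.5, 50493.5)

# Interval around a single POINT (one salary): use the standard deviation itself
ci_point = (mean - z * s, mean + z * s)     # ≈ (45065.0, 54935.0)

print(ci_mean)
print(ci_point)
```

The mean interval is ten times tighter than the point interval, because averaging 100 salaries shrinks the uncertainty by a factor of sqrt(100).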
In this case, you use the standard deviation, but that’s because the question is messed up.
hmmm, but even though they aren’t asking for the CI of the mean salary, isn’t that sort of what they’re asking? since they want the CI of starting salaries, of which the mean is 50,000… so I take it to mean they’re looking for the CI of the mean salary?