Quantitative

A sample of 100 recently hired employees shows an average starting salary of $50,000 and a standard deviation of $3,000. Assuming the population has a normal distribution, construct a 90% confidence interval for the starting salary of a recently hired employee. A) 50,000 +/- 1.65(300). B) 50,000 +/- 1.65(3,000). C) 30,000 +/- 1.65(5,000). D) 50,000 +/- 1.65(30,000). Your answer: A was incorrect. The correct answer was B) 50,000 +/- 1.65(3,000). 90% confidence interval is X ± 1.65s = 50,000 ± 1.65(3,000) = $45,050 to $54,950. Note that this is a confidence interval for a single observation, which we estimate as a number of standard deviations from the mean. If we were constructing a confidence interval for the population mean, we would need to use the standard error (3,000 / √100). Can you please explain?
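To see the arithmetic for both intervals side by side, here is a quick sketch (variable names are mine; 1.65 is the commonly rounded z-value for 90% confidence):

```python
import math

mean = 50_000.0   # sample average starting salary
s = 3_000.0       # sample standard deviation
n = 100           # sample size
z = 1.65          # z-value for 90% confidence (rounded from 1.645)

# CI for a SINGLE observation: use the standard deviation directly.
lo, hi = mean - z * s, mean + z * s
print(f"single salary: {lo:,.0f} to {hi:,.0f}")        # single salary: 45,050 to 54,950

# CI for the POPULATION MEAN: use the standard error s / sqrt(n).
se = s / math.sqrt(n)                                   # 3,000 / 10 = 300
lo_m, hi_m = mean - z * se, mean + z * se
print(f"population mean: {lo_m:,.0f} to {hi_m:,.0f}")   # population mean: 49,505 to 50,495
```

The first interval matches answer B; the second (with the standard error) is the much narrower interval you would report for the average salary, which is what answer A's 300 corresponds to.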

50,000 is the average (the center of the bell-curve distribution) and 3,000 is the standard deviation. The 90% confidence interval says that 90% of individual starting salaries fall within 1.65 standard deviations of the mean, i.e., within +/- 1.65(3,000) = +/- $4,950 of the $50,000 average salary.
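You can sanity-check that "about 90% within 1.65 standard deviations" claim with a quick simulation (the sample size and seed here are my own choices, not from the problem):

```python
import random

random.seed(0)
mean, sd, z = 50_000, 3_000, 1.65
n_draws = 100_000

# Draw simulated individual salaries from a normal distribution and
# count how many land inside mean +/- z * sd.
inside = sum(
    1 for _ in range(n_draws)
    if abs(random.gauss(mean, sd) - mean) <= z * sd
)
print(inside / n_draws)  # roughly 0.90 (the exact normal probability is about 0.901)
```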

Why would you answer A? The standard deviation is 3,000. Other than that, it looks like you have the right idea.

And when do you calculate it with mean ± standard error of the sample mean?

Shouldn't it be 50,000 +/- 1.65(3,000/√100), which is A? Why doesn't the calculation use the standard error?

Why is A not the right answer?

If you are constructing a CI for the AVERAGE starting salary, divide by sqrt(n). If you are constructing a CI for a SINGLE starting salary, do not divide by sqrt(n). Edit: so B is the right answer.
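That rule can be written as a single switch. This helper function is my own illustration, not from the exam material:

```python
import math

def ci(mean, s, n, z=1.65, for_average=False):
    """Confidence interval: divide s by sqrt(n) only when
    estimating the AVERAGE; use s as-is for a SINGLE observation."""
    spread = z * (s / math.sqrt(n) if for_average else s)
    return mean - spread, mean + spread

print(ci(50_000, 3_000, 100))                    # about (45050, 54950) -> answer B
print(ci(50_000, 3_000, 100, for_average=True))  # about (49505, 50495)
```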

Great, thanks Joey.