# Standard error

Why don't we divide by sqrt(1000)?

An airline was concerned about passengers arriving too late at the airport to allow for the additional security measures. Based on a survey of 1,000 passengers, the mean time from arrival at the airport to reaching the boarding gate was 1 hour, 20 minutes, with a standard deviation of 30 minutes. If the airline wants to make sure at the 95 percent confidence level that passengers have sufficient time to catch their flight, how much time ahead of their flight should passengers be advised to arrive at the airport? A) One hour, fifty minutes. B) Two hours, ten minutes. C) Two hours, thirty minutes. D) Two hours, forty-five minutes.

Your answer: A was incorrect. The correct answer was B) Two hours, ten minutes. We can use standard normal tables because the sample is so large. From a table of area under a normally distributed curve, the Z value corresponding to a 95 percent, one-tailed test is 1.65. (We use a one-tailed test because we are not concerned with passengers arriving too early, only arriving too late.) Here we do not divide by sqrt(n) to get the standard error, because we want an interval for an individual passenger's time, not a confidence interval for the mean. The answer is one hour, twenty minutes + 1.65(30 minutes) = 2 hours, 10 minutes.
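The arithmetic above can be checked with a short sketch using Python's standard library (`statistics.NormalDist` replaces the printed Z table; it gives 1.645 rather than the rounded 1.65):

```python
from statistics import NormalDist

# Survey figures from the question: mean 80 min, sample SD 30 min
mean_min = 80.0
sd_min = 30.0

# One-tailed 95% critical value from the standard normal (~1.645;
# printed tables round this to 1.65)
z = NormalDist().inv_cdf(0.95)

# Estimate for ONE passenger's time: scale by the standard deviation
# itself, NOT the standard error, since we are not estimating the mean.
advised_min = mean_min + z * sd_min
print(advised_min)  # ~129.3 minutes, i.e. about 2 hours, 10 minutes
```

With the rounded table value 1.65 you get 129.5 minutes; either way the answer rounds to roughly 2 hours, 10 minutes.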

Why would you divide by 1,000? The 1,000 is just there to gauge which test to use. If the question had said 29 or fewer passengers were surveyed, you would have had to use a t-test instead of a z-test, pushing the advised time above 2 hours, 10 minutes, because with a small sample you are less certain.
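To see that effect, compare the one-tailed 95 percent critical values for z and for t with a small sample (a sketch assuming SciPy is available; `scipy.stats.t.ppf` gives the t quantile):

```python
from scipy.stats import norm, t

z_crit = norm.ppf(0.95)       # large-sample z, ~1.645
t_crit = t.ppf(0.95, df=28)   # n = 29 -> 28 degrees of freedom, ~1.701

# The larger t critical value pushes the advised arrival time
# past the 2h10m answer computed with z:
print(80 + z_crit * 30)  # ~129.3 min
print(80 + t_crit * 30)  # ~131.0 min
```

The t critical value is always larger than the z value for the same confidence level, reflecting the extra uncertainty of a small sample.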

I am confused too… Any quant heads? Per the above, and as a general rule, when do you use the standard error and when do you use the standard deviation in these calculations?
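The general rule asked about here is: use the standard deviation when the question is about a single observation, and the standard error (SD divided by sqrt(n)) when the question is about the sample mean itself. A minimal sketch contrasting the two with the thread's numbers:

```python
import math

mean_min, sd_min, n = 80.0, 30.0, 1000
z = 1.65  # one-tailed 95% table value, as in the original answer

# Question about ONE passenger's time: use the standard deviation.
individual_upper = mean_min + z * sd_min           # ~129.5 min -> advise ~2h10m

# Question about the MEAN time across all passengers: use the standard error.
se = sd_min / math.sqrt(n)                         # ~0.95 min
mean_upper = mean_min + z * se                     # ~81.6 min
print(individual_upper, mean_upper)
```

Dividing by sqrt(1000) here would have produced an interval for the mean arrival-to-gate time (about 81.6 minutes), which is the wrong quantity for advising an individual passenger.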