Autoregressive Model: Gross Margin – McDowell Manufacturing, Quarterly Data: 1st Quarter 1985 to 4th Quarter 2000

Regression Statistics
R-squared         0.767
Standard Error    0.049
Observations      64
Durbin-Watson     1.923 (not statistically significant)

Partial List of Recent Observations (AR(1) with a seasonal lag)
1st Quarter 2003    0.26
4th Quarter 2003    0.24

What is the 95 percent confidence interval for the sales in the first quarter of 2004?
A) 0.158 to 0.354.
B) 0.168 to 0.240.
C) 0.197 to 0.305.
D) 0.11 to 0.31.

Solution from Schweser: The forecast for the following quarter is 0.155 + 0.240(0.240) + 0.168(0.260) = 0.256. Since the standard error is 0.049 and the corresponding t-statistic is 2, we can be 95% confident that sales will be within 0.256 – 2 × (0.049) and 0.256 + 2 × (0.049), or 0.158 to 0.354.

------ This can't be right. The way they built the confidence interval, they used the SEE from the first table (I'm assuming that's the SEE included in the partial ANOVA output) to build a confidence interval for a forecast?! Shouldn't it be the standard error of the forecast instead (which was not covered)?

-------------------------------------------------------------------------------------------------------------------

(same background info as above) Which of the following can Le conclude from the regression? The time series process:
A) includes a seasonality factor, has significant explanatory power, and is mean reverting.
B) includes a seasonality factor and a unit root.
C) includes a seasonality factor and has significant explanatory power.
D) includes a unit root, has significant explanatory power, and is mean reverting.

Solution from Schweser: …Both slope coefficients are significantly different from one:
first lag coefficient: t = (1 − 0.24)/0.031 = 24.52
second lag coefficient: t = (1 − 0.168)/0.038 = 21.89
Thus, the process does not contain a unit root, is stationary, and is mean reverting. The process has significant explanatory power since both slope coefficients are significant and the coefficient of determination is 0.767.

----- This is even more absurd: they used a t-test on the slope coefficients (testing whether each differs from one) as a unit-root test. Wasn't the whole premise of using the DF test that, when there is uncertainty about the existence of a unit root, you cannot rely on t-tests on the slope coefficients until the time series is proven to be stationary? Another one:

----------------------------------------------------------------------------------------------------------------

Wireless Phone Minutes: WPMt = b0 + b1 × WPMt-1 + εt   (28 observations)

Coefficients    Coefficient    Standard Error of the Coefficient
Intercept       -8.0237        2.9023
WPMt-1           1.0926        0.0673

Part 2) Is the time series of WPM covariance stationary?
A) Yes, because the computed t-statistic for a slope of 1 is not significant.
B) No, because the computed t-statistic for a slope of 1 is not significant.
C) Yes, because the computed t-statistic for a slope of 1 is significant.
D) No, because the computed t-statistic for a slope of 1 is significant.

Solution from Schweser: The t-statistic for the test of the slope equal to 1 is computed by subtracting 1.0 from the coefficient, 1.3759 [= (1.0926 − 1.0) / 0.0673], which is not significant at the 5% level. The time series has a unit root and is not covariance stationary.

---- Any thoughts? Perhaps I'm missing something here…
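(For reference, here is the arithmetic Schweser is doing, as a quick sketch. The coefficients 0.155, 0.240 and 0.168 come from their solution text, not from the partial output above; the ±2 × SEE step is exactly the part I'm disputing.)

```python
# Sketch of Schweser's arithmetic. Coefficients are taken from their solution text.
b0, b1, b4 = 0.155, 0.240, 0.168   # intercept, lag-1 coefficient, seasonal lag-4 coefficient
x_lag1, x_lag4 = 0.24, 0.26        # 4th Quarter 2003 and 1st Quarter 2003 observations
see = 0.049                        # standard error from the regression statistics

forecast = b0 + b1 * x_lag1 + b4 * x_lag4              # 0.256
lower, upper = forecast - 2 * see, forecast + 2 * see  # the disputed +/- 2 x SEE step
print(round(forecast, 3), round(lower, 3), round(upper, 3))  # 0.256 0.158 0.354
```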
anyone?
Lotta reading there…
1) Some info is missing (i.e., it looks like you have an AR(2) model and the times are mistyped?), but if the question is whether Schweser knows the difference between a confidence interval and a forecast interval, the answer is clear. They don't. Not even in a simple regression model, to say nothing of time series.
2) Yep, that's not even close (separate t-tests on two variables to determine unit roots?!).
3) Similarly stupid as 2).
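To make the distinction concrete, here's a minimal OLS sketch with statsmodels (made-up data, nothing from the vignette): `get_prediction()` reports both the confidence interval for the mean response and the wider prediction (forecast) interval for a single new observation.

```python
import numpy as np
import statsmodels.api as sm

# Made-up data just to illustrate the two intervals: y = 1.5 + 0.8x + noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 1.5 + 0.8 * x + rng.normal(scale=2.0, size=50)

fit = sm.OLS(y, sm.add_constant(x)).fit()

# Predict at x = 5: mean_ci_* is the confidence interval for the regression mean;
# obs_ci_* is the wider prediction (forecast) interval for a new observation.
x_new = np.array([[1.0, 5.0]])  # [const, x]
frame = fit.get_prediction(x_new).summary_frame(alpha=0.05)
print(frame[["mean", "mean_ci_lower", "mean_ci_upper", "obs_ci_lower", "obs_ci_upper"]])
```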
ya, I think the reading discouraged most people…
1) Ya, I cut out some of the info to make it shorter. The model is actually an AR(1) with a seasonal lag, so there's a t-1 and a t-4 term.
2) and 3) Good, it seems like I didn't miss anything, but I just can't believe how sloppy their solution is…
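If anyone wants to play with that lag structure, here's a rough sketch of fitting a lag-1 plus lag-4 model with statsmodels' AutoReg; the series is simulated, not the McDowell numbers.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Simulated quarterly series with lag-1 and lag-4 (seasonal) dependence.
rng = np.random.default_rng(1)
y = np.zeros(80)
for t in range(4, 80):
    y[t] = 0.15 + 0.24 * y[t - 1] + 0.17 * y[t - 4] + rng.normal(scale=0.05)

# "AR(1) with a seasonal lag": regress y_t on y_{t-1} and y_{t-4} only.
res = AutoReg(y, lags=[1, 4]).fit()
print(res.params)                             # intercept, lag-1, lag-4 coefficients
print(res.predict(start=len(y), end=len(y)))  # one-step-ahead forecast
```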
Wow, that is a bad batch. Where did these come from?
Schweser basically used the same formula as for multiple regression. Can you use it for autoregressive models? Just hit the book: you need to use the Dickey-Fuller test to test for a unit root (nonstationarity).
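For anyone who wants to see it mechanically, here's a minimal sketch of the (augmented) Dickey-Fuller test with statsmodels, on a simulated stationary AR(1) series, purely illustrative.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Simulated stationary AR(1) series.
rng = np.random.default_rng(2)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.5 * y[t - 1] + rng.normal()

# adfuller regresses the differenced series on the lagged level and compares the
# statistic to Dickey-Fuller critical values, not the usual t-distribution.
adf_stat, p_value, used_lags, nobs, crit_values, _ = adfuller(y)
print(adf_stat, p_value, crit_values)  # small p-value -> reject the unit-root null
```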
damn i knew something was up with this damn question when they were testing the individual slopes.
"------this can’t be right, I mean the way they built the confidence interval, they used SEE from the first table (I’m assuming thats SEE included in the partial ANOVA output) to build a confidence interval for a forcast?!..shouldn’t it be the standard error of forcast instead (which was not covered) " was thinking the same thing. this problem blows…
man i thought joey was back
then i looked at the date