An analyst is estimating whether a fund’s excess return for a month depends on interest rates and on whether the S&P 500 has increased or decreased during the month. The analyst collects 90 monthly return premia (the return on the fund minus the return on the S&P 500 benchmark), 90 monthly interest rates, and 90 monthly S&P 500 index returns from July 1999 to December 2006. After estimating the regression equation, the analyst finds that the correlation between the regression's residuals from one period and the residuals from the previous period is 0.145 (DW = 1.71). Which of the following is *most* accurate at a 0.05 level of significance, based solely on the information provided? The analyst:

A)

can conclude that the regression exhibits serial correlation, but cannot conclude that the regression exhibits heteroskedasticity.

B)

can conclude that the regression exhibits heteroskedasticity, but cannot conclude that the regression exhibits serial correlation.

C)

cannot conclude that the regression exhibits either serial correlation or heteroskedasticity.

**Explanation**

The Durbin-Watson statistic tests for serial correlation. The calculated DW of 1.71 is higher than the upper Durbin-Watson critical value (with 2 independent variables and 90 observations) of 1.70. That means the null hypothesis of no positive serial correlation cannot be rejected. There is no information on whether the regression exhibits heteroskedasticity.

(Study Session 2, Module 5.7, LOS 5.k)
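The decision rule behind the explanation can be sketched in a few lines. This is an illustrative sketch only, using the values given in the question (r = 0.145, n = 90, k = 2, and the tabled critical values d_L = 1.61 and d_U = 1.70 mentioned in the follow-up below); it relies on the standard approximation DW ≈ 2(1 − r):

```python
# Approximate relationship between the Durbin-Watson statistic and the
# first-order autocorrelation r of the regression residuals: DW ≈ 2(1 - r).
r = 0.145              # lag-1 residual correlation given in the question
dw = 2 * (1 - r)       # ≈ 1.71, matching the reported DW statistic

# Tabled critical values for n = 90 observations, k = 2 independent
# variables (values taken from the follow-up question, not recomputed).
d_L, d_U = 1.61, 1.70

# Decision rule for the one-sided test of positive serial correlation
# (H0: residual autocorrelation = 0):
if dw < d_L:
    decision = "reject H0: evidence of positive serial correlation"
elif dw < d_U:
    decision = "inconclusive"
else:
    decision = "fail to reject H0: no evidence of positive serial correlation"

print(round(dw, 2), "->", decision)
```

Because DW = 1.71 falls just above d_U = 1.70, the test lands in the fail-to-reject region rather than the inconclusive or rejection regions, which is why answer C is correct.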

My question: in my table, the lower and upper critical values for 90 observations and 2 variables are d_L = 1.61 and d_U = 1.70. With the given DW statistic of 1.71, can’t you conclude that there is serial correlation by rejecting the null hypothesis that it is 0?