An analyst is trying to estimate the beta for a fund. The analyst estimates a regression equation in which the fund returns are the dependent variable and the Wilshire 5000 is the independent variable, using monthly data over the past five years. The analyst finds that the correlation between the square of the residuals of the regression and the Wilshire 5000 is 0.2. Which of the following is *most* accurate, assuming a 0.05 level of significance? There is:

A)

no evidence that there is conditional heteroskedasticity or serial correlation in the regression equation.

B)

evidence of serial correlation but not conditional heteroskedasticity in the regression equation.

C)

evidence of conditional heteroskedasticity but not serial correlation in the regression equation.

**Explanation**

The test for conditional heteroskedasticity (the Breusch–Pagan test) involves regressing the square of the residuals on the independent variables of the regression and forming the test statistic n × R², where n is the number of observations and R² is from the squared-residual regression. The test statistic follows a chi-squared distribution with degrees of freedom equal to the number of independent variables. With a single independent variable, R² equals the square of the correlation; so here the test statistic is 60 × 0.2² = 60 × 0.04 = 2.4, which is less than the chi-squared critical value (with one degree of freedom) of 3.84 at a 0.05 significance level. Nothing in the question provides evidence about serial correlation.
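The arithmetic in the explanation can be sketched as follows. This is a minimal illustration of the n × R² calculation, assuming 60 monthly observations and using SciPy to look up the chi-squared critical value:

```python
from scipy.stats import chi2

n = 60                  # 5 years of monthly data
corr = 0.2              # correlation of squared residuals with the index
r_squared = corr ** 2   # one regressor, so R^2 = correlation^2 = 0.04
bp_stat = n * r_squared # Breusch-Pagan test statistic: n * R^2 = 2.4

df = 1                  # one independent variable in the auxiliary regression
critical = chi2.ppf(0.95, df)  # 5% critical value, about 3.84

# 2.4 < 3.84, so we fail to reject homoskedasticity
print(bp_stat, critical, bp_stat > critical)
```

Because 2.4 falls below the 3.84 critical value, the test does not reject the null of homoskedasticity, consistent with answer A.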

A is correct. I understand most of this except: why is the chi-squared df = 1? Is there something I'm forgetting?