ANOVA - F-test

I can’t wrap my head around this.

“The F-statistic tests whether all the slope coefficients in a linear regression are equal to 0” - Volume 1 pg 292-293.

Why would we want to test for this? If H0: b1=0 is true, wouldn’t that just be a horizontal line because there would be no slope?

Yes, that could perhaps be the case.

The F-test tests the overall significance of the independent variables in explaining the dependent variable. A result indicating that the independent variables have no significant explanatory power for the dependent variable is still a result…
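
For reference, using the curriculum's notation (RSS = regression sum of squares, SSE = sum of squared errors, k = number of slopes, n = number of observations), the statistic is

F = MSR/MSE = (RSS/k) / (SSE/(n - k - 1)),

which tests H0: b1 = b2 = … = bk = 0 against the alternative that at least one slope differs from zero.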

It can also be used in conjunction with the coefficient of determination to flag possible multicollinearity in the model, as there are inherent weaknesses in using R^2 alone to judge model significance.
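
(The two are directly linked: for a regression with an intercept, F = (R^2/k) / ((1 - R^2)/(n - k - 1)). Whether a given R^2 is “significant” therefore depends on k and n, which is exactly why R^2 alone is not a test.)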

If the F-statistic falls into the acceptance region – so we cannot reject the null hypothesis that all of the slope coefficients are zero – one possibility is that the model is useless: the independent variables do not explain any of the variation in the dependent variable.

Another possibility is that the model exhibits multicollinearity: having an insignificant F-statistic for all of the slopes together, but significant t-statistics for one or more of the individual slopes, would indicate multicollinearity.

I believe that this is slightly backwards.

The F-statistic is a joint test for significance that is unaffected by multicollinearity. Multicollinearity inflates the variances of the estimated coefficients, and therefore the standard errors of the estimated coefficients, making it more likely that _t-tests on individual terms_ will appear _nonsignificant_.
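
To put a formula behind that: the sampling variance of slope bj is s^2 / [(n - 1) * Var(xj) * (1 - Rj^2)], where Rj^2 is the R^2 from regressing xj on the other independent variables. The factor 1/(1 - Rj^2) is the variance inflation factor (VIF): as the regressors become more collinear, Rj^2 approaches 1, the standard errors blow up, and the individual t-statistics shrink.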

If the global F-test (testing all coefficients except the intercept) results in failing to reject the null hypothesis, the model _is statistically useless_. None of the terms tested were statistically useful for predicting the dependent variable.

In sum, multicollinearity will affect individual t-statistics through the estimated standard errors of the related coefficients. Therefore, an indicator of multicollinearity is that although no individual t-statistics for slope coefficients are significant, the F-test (joint significance) shows that at least one of the tested coefficients is different from zero.
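
If you want to see this for yourself, here is a minimal simulation sketch in Python (statsmodels; the variables and numbers are invented for illustration): two nearly identical regressors give a strongly significant joint F-test while both individual t-tests look insignificant.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)  # x2 is nearly a copy of x1
y = 1.0 + 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()

# Joint test: H0 says both slopes are zero -- decisively rejected.
print("F =", round(fit.fvalue, 1), " p =", fit.f_pvalue)
# Individual tests: with this much collinearity, each slope looks
# useless on its own (large standard errors, small t-statistics).
print("t =", fit.tvalues[1:].round(2), " p =", fit.pvalues[1:].round(2))
```

Drop either regressor and refit, and the remaining slope's t-statistic becomes strongly significant again.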

There are other measures to identify the presence/degree of multicollinearity, but I think this answer is most relevant.

Also, to add to Dwheats: when evaluating a model (assuming a significant F-test), it is imperative to examine other model-based statistics, such as R-squared, the SER, the CV, etc., to determine the practical utility of the model. In a statistical sense, using R-squared by itself is not any form of “test” (no hypotheses, no measure of reliability, no general conventions…)
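
(For reference, the SER is sqrt(SSE/(n - k - 1)), i.e., the square root of the MSE: it gauges the typical size of a prediction error in the units of the dependent variable, which speaks to practical utility in a way that R-squared by itself does not.)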

To the original question: if you failed to reject the null, it would indicate that your (tested) model is no better than using the sample mean for prediction (as you said, a horizontal line). However, this just means we need to try new variables to help us predict our dependent variable (it does not imply that the TRUE model is a horizontal line).
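
(Algebraically: if b1 = b2 = … = bk = 0, the best the regression can do is the intercept alone, and the intercept-only OLS fit is Y-hat = Y-bar, the sample mean, i.e., the horizontal line you described.)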

Not slightly backwards: _completely_, _utterly_, _absolutely_ backwards.

Good catch; I’ve corrected it.

I try to be less confrontational than some haha