 # Quant: F-statistic

We know that the F-statistic is used to test whether all of the slope coefficients in a linear regression are jointly equal to 0, which makes sense when we are testing the statistical (in)significance of the slope coefficients as a group. What confuses me is that it is a one-tailed test even though the hypothesis uses an equal sign. Can you please explain, from a conceptual point of view, why the F-test is one-tailed with an equal sign?

Trekker, since the F-statistic is calculated from sums of squares, it can't take a negative value. So when you test whether the regression has any significance, you are only looking at large positive values in the right tail. By contrast, when you calculate the t-statistic you have the estimated coefficient in the numerator, which can be positive or negative, so that test needs two tails. Hope that helps.

It is really a 2-tailed test, but the way we use it makes it a 1-tailed test: we always put the larger mean square in the numerator. For an F-test, the opposite tail is simply the reciprocal (1/x), with the degrees of freedom swapped.
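To illustrate AMCC's point numerically: if X follows an F(d1, d2) distribution, then 1/X follows F(d2, d1), so the lower tail of one distribution corresponds to the upper tail of the reciprocal distribution. A minimal sketch using SciPy, with hypothetical degrees of freedom and cutoff:

```python
from scipy.stats import f

d1, d2 = 3, 20   # hypothetical degrees of freedom
x = 0.4          # a value in the lower tail of F(d1, d2)

# P(F(d1,d2) <= x) equals P(F(d2,d1) >= 1/x):
left_tail = f.cdf(x, d1, d2)            # lower-tail probability
right_tail_of_inverse = f.sf(1.0 / x, d2, d1)  # upper tail, df swapped

print(left_tail, right_tail_of_inverse)  # the two probabilities match
```

This is why tables only print upper-tail F critical values: the lower-tail value is recovered by taking a reciprocal and swapping the degrees of freedom.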

Thanks guys. greyhound86 - that makes perfect sense. Along with the sum of squared errors being positive, the regression sum of squares is also positive, and this translates into a positive ratio (since F-statistic = MSR/MSE). AMCC - not sure that I understand your explanation, but please refer to greyhound86's response above.
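The F = MSR/MSE point above can be checked directly on a toy regression. A minimal sketch with hypothetical data, showing that both sums of squares, and hence the F-statistic, are non-negative:

```python
import numpy as np

# Hypothetical data roughly following y = 2x (illustration only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])
n, k = len(y), 1  # k = number of slope coefficients

# OLS fit with an intercept
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# Both quantities are sums of squares, so neither can be negative
rss = np.sum((y_hat - y.mean()) ** 2)  # regression sum of squares
sse = np.sum((y - y_hat) ** 2)         # sum of squared errors

msr = rss / k            # mean square regression
mse = sse / (n - k - 1)  # mean square error
F = msr / mse

print(F >= 0)  # True: the F-statistic can never be negative
```

Since F is bounded below by zero, only unusually large values count as evidence against the null that all slopes are zero, which is exactly why the test uses a single right tail.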