Autocorrelation, Heteroskedasticity, Multicollinearity & Coefficients

I have a question about Schweser Exam Workshop Question 4: Annie Small, Problem #3 and Question 1: Jeffrey Simpkins Problem #5.

In both questions, they are testing the same concept, and I am confused about when the coefficients are affected. My rule of thumb was:

Heteroskedasticity: The coefficients are not affected, but the standard errors are incorrect.

Autocorrelation or Serial Correlation: The coefficients are not affected, but the standard errors are incorrect.

Multicollinearity: Both the coefficients and the standard errors are incorrect.

That led me to answer B for Problem #3, as the coefficients b0 and b1 would be valid but the standard errors would not.

And then I got the question wrong, because apparently the estimates would be invalid since there was autocorrelation in the residuals.

Can anyone help explain this for me?

It is one thing to be valid and another thing to be correct. The problem with autocorrelation and heteroskedasticity is that the t-statistics are inflated, which can lead to the conclusion that the coefficients are statistically significant (or "valid", as worded here). You might have noticed that when we correct for these two violations, all we really do is deflate the t-statistics; we do not change the coefficients, as they are not incorrect.

Correcting for multicollinearity, however, will typically lead to removing a variable and re-estimating the model. This means that we will get a new coefficient; hence our previous one was incorrect.

In the first case, the problem is that a coefficient may look significant when it really isn't. In the second, you get a coefficient that is downright meaningless.

Why is validity important? If autocorrelation or heteroskedasticity inflate the t-statistics enough to make you believe that a coefficient is significant, you will conclude that your variable has explanatory power when in fact it doesn't.