Conditional Heteroskedasticity and the t-stat

Qbank question. Concern 1: If my regression errors exhibit conditional heteroskedasticity, my t-statistics will be underestimated. Answer: False. The consequence of conditional heteroskedasticity is that the standard errors will be too low, which, in turn, causes the t-statistics to be too high. How is it that the standard errors will be too low? Wouldn’t higher residuals, and thus higher variance, cause an increased standard deviation?

Yes, you are right, it doesn’t make sense to state it like that, but I think what is meant here is this: if the errors (and thus the standard errors) are large, you’re OK, because in that case you will not have significant coefficients (i.e., you will have small t-stats and fail to reject H0), so there’s nothing to worry about (no chance of making the horrible Type I error). The problem occurs when conditional heteroskedasticity results in small standard errors, which then cause you to accept bad coefficients. So I believe it is not correct to say that t-statistics will be underestimated, because they could be underestimated, overestimated, or correctly estimated. It’s just that when they are overestimated, they cause damage. Any stats gurus want to chime in?

To get the t-stat, it’s something like t = (b1_hat − b1) / s(b1_hat), where b1_hat is the estimated slope, b1 is the hypothesized value (zero, for a significance test), and s(b1_hat) is the standard error of the slope. A property of conditional heteroskedasticity is that it can produce small standard errors. Thus when you run a t-test (to see if the slope coefficient is significant against zero), you end up calculating an artificially large t-value to compare against the t-critical value. That can make t(value) larger than t(critical), indicating the slope coefficient is significantly different from zero, when in fact it might not be, because conditional heteroskedasticity produced a small standard error.

Also, please don’t quote me on the formula at the start of the message; I was too lazy to look it up, but I know the standard error is alone in the denominator.
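To make the mechanics concrete, here’s a quick sketch in Python. All the numbers (the slope, the two standard errors, the sample size) are invented purely for illustration; the only point is that the same coefficient with a smaller standard error produces a bigger t-stat:

```python
from scipy import stats

# Hypothetical regression output (every number here is made up).
b1 = 0.50           # estimated slope coefficient
se_honest = 0.30    # a correctly estimated standard error
se_deflated = 0.15  # the understated SE that CH can produce

n = 60                           # assumed number of observations
df = n - 2                       # df for a simple regression (n - k - 1, k = 1)
t_crit = stats.t.ppf(0.975, df)  # two-tailed 5% critical value

for label, se in [("honest SE  ", se_honest), ("deflated SE", se_deflated)]:
    t = (b1 - 0.0) / se          # H0: b1 = 0, so t = (b1 - 0) / SE
    print(f"{label}: t = {t:.2f} vs t_crit = {t_crit:.2f}; "
          f"reject H0? {abs(t) > t_crit}")
```

Same coefficient, but halving the standard error doubles the t-stat and flips the conclusion from “don’t reject” to “reject,” which is exactly the damage described above.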

Reggie, both messages are perfectly right.

The question being asked, Reggie, is why the standard errors must be small. We agree that small standard errors inflate the t-stats, but are they always small? What exactly is conditional heteroskedasticity? Isn’t it when the residual errors are not constant and have a pattern in them? An error between actual and predicted could be big or small.

Yeah, I’m not too sure. I just know the CFA text and Passmaster have been saying that conditional heteroskedasticity is when there is correlation in the error terms. One of the assumptions of linear regression is that the error terms are uncorrelated, so conditional heteroskedasticity makes a regression invalid. Also, one of the results of conditional heteroskedasticity is that it produces artificially small standard errors. I understand what you’re saying: why must they be small, when the standard errors could be large and still correlated? And the answer is, I’m not too sure. It’s difficult for some people (like me) to understand 100% of the material, so sometimes I just have to memorize what the book says and use that for my answers. If I had to hypothesize, I’d guess that when C.H. occurs the errors are correlated with each other, so when you run the regression and each error is compared with the previous error, the difference is smaller (generally) than if the errors were uncorrelated. But I have no clue, like I said; if someone knows the reason, I’m actually pretty interested to hear it right now.

I think (I may be wrong) that you test for conditional heteroskedasticity only after your original regression looks good (high R-squared, significant t-stats). So let’s take it that the t-stats are high. Let’s also say that you suspect conditional heteroskedasticity. In that case you really mean that you don’t trust the t-stats because the standard error is too small (an underestimate). If you are right and there is conditional heteroskedasticity, then it MUST be because there is an underestimate of the standard error. It’s like a chicken-and-egg thing: if the standard error were being estimated too high, the original regression would tell you the t-stat is low and you wouldn’t HAVE to test for CH. It’s only when the original regression seems to be valid that you test for CH, and the only conclusion, if CH is present, is that the original regression underestimates the standard error and gives artificially high t-stats.
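If you want to actually run that test rather than just suspect, the usual tool is a Breusch-Pagan test on the squared residuals. Here’s a minimal sketch using statsmodels on simulated data (the data-generating process is fabricated so that the error variance deliberately grows with x):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(42)
n = 500
x = rng.uniform(1, 10, n)
e = rng.normal(0, 0.5 * x)   # error std dev grows with x: conditional heteroskedasticity
y = 2.0 + 1.5 * x + e

X = sm.add_constant(x)
res = sm.OLS(y, X).fit()

# Breusch-Pagan regresses the squared residuals on the regressors;
# a small p-value rejects homoskedasticity.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(res.resid, X)
print(f"BP LM stat = {lm_stat:.1f}, p-value = {lm_pvalue:.4g}")
```

(The CFA curriculum frames this same idea as the BP chi-square test: n times the R-squared from regressing the squared residuals on the independent variables.)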

I believe that conditional heteroskedasticity has to do with increasing/decreasing variance in the observations you are trying to regress as the independent variable changes. Correlated errors are a different violation: that’s “serial correlation.” Say we have increasing variance as the independent variable increases. As you move right along the graph, the observation points will fall further and further from the regression line (the predicted values). Therefore, a larger and larger part of the error term becomes a result of the difference between Y-bar and Y-actual, as opposed to Y-actual and Y-predicted (think SS regression vs. SS residual). This results in the estimated standard error decreasing and, as explained earlier, the t-stats artificially increasing. P.S. I don’t guarantee 100% accuracy on that answer :slight_smile:
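That intuition can be checked by simulation. Here’s a sketch (again statsmodels, fabricated data): fit OLS once with the classic homoskedastic standard errors and once with White’s heteroskedasticity-consistent (“HC”) standard errors, and compare the slope SEs:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(1, 10, n)
e = rng.normal(0, 0.5 * x)   # error variance rises with x
y = 2.0 + 1.5 * x + e

X = sm.add_constant(x)
naive = sm.OLS(y, X).fit()                 # classic (homoskedastic) SEs
robust = sm.OLS(y, X).fit(cov_type="HC1")  # White-style robust SEs

print(f"slope SE, naive : {naive.bse[1]:.4f}  (t = {naive.tvalues[1]:.1f})")
print(f"slope SE, robust: {robust.bse[1]:.4f}  (t = {robust.tvalues[1]:.1f})")
```

With variance growing in the independent variable, the naive slope SE typically comes out smaller than the robust one, i.e. the naive t-stat is inflated. The direction isn’t guaranteed in general, though, which fits the earlier point that t-stats could be under-, over-, or correctly estimated; the robust (White-corrected) standard errors are the standard fix either way.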