Larger t-stats lead to more Type 1 errors?

I’m a little confused about the effects of heteroskedasticity and serial correlation. According to the book, an issue with heteroskedasticity (and serial correlation) is that it often leads to standard errors that are too small, which produces larger t-stats and more Type I errors (rejecting the null hypothesis more often when it is actually true).

Shouldn’t it be the other way around? The probability of a Type I error is α and the confidence level should be (1 − α). For example, when the confidence level is 95%, the probability of a Type I error should be 5%. But the higher the confidence level, the larger the t-stat, so it seems like larger t-stats should lead to fewer Type I errors, not more.

I’m having trouble spotting the error in my thinking. Can someone clarify this for me?

Yes, a higher confidence level (and with it a larger critical value) does mean fewer Type I errors, but that cutoff isn’t what the book is talking about. In the case of heteroskedasticity, it’s your estimated standard errors that are biased, typically downward, and therefore the t-stat you compute is no longer valid.
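To make the distinction concrete, here is the usual OLS t-test in symbols (the thread doesn’t spell this out, so take the notation as an illustration):

$$ t = \frac{\hat{\beta}_j - \beta_{j,0}}{\widehat{\operatorname{se}}(\hat{\beta}_j)}, \qquad \text{reject } H_0 \text{ if } |t| > t_{1-\alpha/2,\,n-k}. $$

The critical value $t_{1-\alpha/2,\,n-k}$ depends only on α and the degrees of freedom, so it doesn’t move. What heteroskedasticity distorts is $\widehat{\operatorname{se}}(\hat{\beta}_j)$: if it is too small, $|t|$ is inflated and crosses the unchanged cutoff more often than α percent of the time.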

So, if your t-stat is not valid and is too high, there’s a greater chance of it exceeding the critical value and leading you to incorrectly reject a true null.
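Here is a minimal Monte Carlo sketch (not from the book; it assumes numpy and statsmodels are installed, and the error-variance form and the HC1 robust correction are just choices for illustration) showing the effect: with a true null and heteroskedastic errors, conventional OLS standard errors reject far more than 5% of the time, while robust standard errors bring the rate back near 5%.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_sims, alpha = 200, 2000, 0.05
reject_conventional = 0
reject_robust = 0

for _ in range(n_sims):
    x = rng.normal(size=n)
    # True slope is 0 (H0 is true); error variance grows with x**2,
    # so the errors are heteroskedastic.
    eps = rng.normal(size=n) * np.sqrt(0.5 + 2.0 * x**2)
    y = 1.0 + 0.0 * x + eps

    X = sm.add_constant(x)
    model = sm.OLS(y, X)

    p_conventional = model.fit().pvalues[1]          # conventional SEs
    p_robust = model.fit(cov_type="HC1").pvalues[1]  # heteroskedasticity-robust SEs

    reject_conventional += p_conventional < alpha
    reject_robust += p_robust < alpha

print(f"Rejection rate with conventional SEs: {reject_conventional / n_sims:.3f}")
print(f"Rejection rate with robust SEs:       {reject_robust / n_sims:.3f}")
```

The first rate comes out well above the nominal 0.05 (the Type I error inflation the book warns about), while the robust-SE rate sits close to 0.05.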