Positive/negative serial corr & Multicoll - Different effects on T-Stat??

I read the following 3 points, but they don't make sense to me:

1. Positive serial correlation => results in low standard errors and high t-stats
2. Negative serial correlation => results in high standard errors and low t-stats
3. Multicollinearity => results in high standard errors and low t-stats

The first makes sense to me, but #2 and #3 don't. Why would the positive/negative aspect of serial correlation have a different effect on standard errors, since standard deviation is usually evaluated on an absolute basis (i.e. standard deviation is the average deviation +/- from the mean)? Also, one would think that multicollinearity would result in artificially LOW standard errors, since the independent variables are highly correlated. Are the above 3 points correct, or not? Either way, any explanation is MUCH appreciated.
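For what it's worth, a quick Monte Carlo makes the three points easy to check yourself. This is a toy setup of my own (Python with numpy/statsmodels, not anything from the curriculum): the true slope is zero, so every rejection of the null is a false positive, and the rejection rate tells you whether the naive t-stats are running too big or too small.

```python
import numpy as np
import statsmodels.api as sm

def rejection_rate(rho, n=100, trials=1000, seed=0):
    """Share of 5% t-tests that reject a TRUE null when errors are AR(1) with coefficient rho."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(trials):
        x = np.zeros(n)
        e = np.zeros(n)
        for t in range(1, n):
            x[t] = 0.8 * x[t - 1] + rng.standard_normal()  # autocorrelated regressor
            e[t] = rho * e[t - 1] + rng.standard_normal()  # AR(1) errors
        fit = sm.OLS(e, sm.add_constant(x)).fit()          # true slope on x is zero
        if abs(fit.tvalues[1]) > 1.96:
            rejections += 1
    return rejections / trials

print("rho=+0.8:", rejection_rate(+0.8))  # well above 0.05 -> inflated t-stats (Type I)
print("rho=-0.8:", rejection_rate(-0.8))  # below 0.05 -> deflated t-stats (Type II)
print("rho= 0.0:", rejection_rate(0.0))   # about 0.05, as it should be
```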

This is what I have in mind: Serial correlation: high t-stats because of low standard errors. Multi: high F-stat, low t-stats, and high correlation between independent variables.

I just read that multicollinearity = high R^2, significant F-stat, insignificant t-stats. I'm pretty sure that conditional heteroskedasticity and serial correlation = high t-stats, insignificant F-stats, low standard errors. Please confirm this, as it is off the top of my head.
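The multicollinearity pattern in particular is easy to reproduce with made-up data (Python/statsmodels again, my own toy example): two nearly identical regressors jointly explain y very well, but neither coefficient can be pinned down individually.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100
x1 = rng.standard_normal(n)
x2 = x1 + 0.05 * rng.standard_normal(n)   # x2 is almost a copy of x1
y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.standard_normal(n)

fit = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print(fit.rsquared)      # high: together the regressors explain y well
print(fit.f_pvalue)      # tiny: jointly, the regressors are clearly significant
print(fit.pvalues[1:])   # yet each individual t-test can come out insignificant
```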

Also, conditional heteroskedasticity can be corrected with robust standard errors, but serial correlation with… the Hansen method to adjust the standard errors… is that correct?
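In code, the two corrections look something like this. statsmodels is my choice of library (the thread doesn't name one), and its Newey-West HAC option is the standard implementation of the Hansen-style adjustment; HC1 is one of the White variants:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = np.zeros(n)
e = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()  # autocorrelated regressor
    e[t] = 0.7 * e[t - 1] + rng.standard_normal()  # positively serially correlated errors
y = 1.0 + 0.5 * x + e
X = sm.add_constant(x)

naive = sm.OLS(y, X).fit()                                       # assumes i.i.d. errors
white = sm.OLS(y, X).fit(cov_type="HC1")                         # heteroskedasticity only
hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 5})  # hetero + serial correlation
print(naive.bse[1], white.bse[1], hac.bse[1])  # the HAC SE typically comes out largest here
```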

Negative serial correlation implies a positive error followed by a negative error, followed by a positive error, etc. It really isn't a big issue, since the error terms tend to cancel each other out.

rhythm - you are correct

The Hansen method simultaneously corrects for heteroskedasticity as well.

  1. Negative serial correlation => results in high standard errors and low t-stats

I don't think this is true. SC has low standard errors and high t-stats. That's it.

rpr, your assumptions are for positive serial correlation only. Negative is the opposite.

^Precisely. Negative serial correlation results in Type II errors, while positive serial correlation (more common in investments) results in Type I errors.

The effects of +ve and -ve serial correlation on the standard error term are opposite. With +ve, your errors cluster: a positive error tends to be followed by another positive error, so the residuals sit on the same side of the regression line for long stretches. The usual OLS formula assumes independent errors, so it understates the true variability, and the standard error comes out smaller than it should be. With -ve, your errors take turns on either side of your regression line - now above, now below - so they partly cancel out, and the formula overstates the variability; the standard error comes out larger than perhaps it should be.

Since the coefficients are unaffected, the denominator makes the difference - when it's too small (+ve SC) the t-stat comes out too big; the opposite for -ve SC. If we're not careful we end up rejecting the null with the too-big +ve t-stat (that's false significance, Type I). With -ve the problem is the reverse, of course - false insignificance (Type II), we fail to reject when we should.

To correct these use White's robust standard errors (you should memorize these complex formulae, as they're certain to come up). For Multi-C (which gives too-small t-stats), use Hansen robust standard errors (these will work even where SC is an issue).
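To see that denominator argument in numbers, here's a toy AR(1) simulation of my own (Python/statsmodels): compare the standard error OLS reports against the actual spread of the slope estimate across repeated samples.

```python
import numpy as np
import statsmodels.api as sm

def reported_vs_actual(rho, n=100, trials=1000, seed=1):
    """Average OLS-reported slope SE vs the true sampling spread of the slope."""
    rng = np.random.default_rng(seed)
    slopes, reported = [], []
    for _ in range(trials):
        x = np.zeros(n)
        e = np.zeros(n)
        for t in range(1, n):
            x[t] = 0.8 * x[t - 1] + rng.standard_normal()
            e[t] = rho * e[t - 1] + rng.standard_normal()
        fit = sm.OLS(e, sm.add_constant(x)).fit()  # true slope is zero
        slopes.append(fit.params[1])
        reported.append(fit.bse[1])
    return np.mean(reported), np.std(slopes)

for rho in (+0.8, -0.8):
    se, spread = reported_vs_actual(rho)
    print(f"rho={rho:+.1f}: reported SE ~ {se:.3f}, actual spread ~ {spread:.3f}")
# +ve SC: reported < actual (denominator too small, t too big, Type I)
# -ve SC: reported > actual (denominator too big, t too small, Type II)
```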

White standard errors correct for heteroskedasticity. Hansen corrects for both serial correlation and heteroskedasticity, not multicollinearity. There is no fix for multicollinearity other than omitting variables (since the independent variables are correlated).
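Since the only remedy is dropping a variable, detection is the practical step. A common check is the variance inflation factor; here's a hedged sketch with statsmodels (my choice of library, using made-up data):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(7)
n = 100
x1 = rng.standard_normal(n)
x2 = x1 + 0.05 * rng.standard_normal(n)   # nearly collinear with x1
x3 = rng.standard_normal(n)               # unrelated regressor
X = sm.add_constant(np.column_stack([x1, x2, x3]))

# VIF = 1 / (1 - R^2 of that regressor on the others); > 10 is a common red flag
for i in range(1, X.shape[1]):
    print(f"VIF x{i}: {variance_inflation_factor(X, i):.1f}")
# x1 and x2 come out huge, x3 stays near 1; dropping x1 or x2 is the usual remedy
```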