Heteroskedasticity: What is it? When the variance of the residuals is NOT the same across all sample observations. This creates a problem with your multiple regression analysis. What are the effects on regression analysis? 1. Std errors are unreliable. 2. Coefficient estimates are unaffected. 3. If the std error is too small, the t-stat will be too big, and vice versa (t-stat = coefficient / std error). 4. The F-test is UNRELIABLE. How do you detect it? Two ways: 1. look at the damn picture (plot the residuals), 2. the Breusch-Pagan chi-square test (n x R^2). How do you correct it? Robust std errors, aka White-corrected std errors. Remember: the std error is inaccurate, so White had to correct it. You then use the corrected std error in the denominator to calculate the t-stat, and voila!
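If it helps to see it in code, here's a minimal Python sketch (simulated data and statsmodels assumed, not from the original post) that runs the Breusch-Pagan test and then refits with White/robust standard errors:

```python
# Sketch: detect heteroskedasticity (Breusch-Pagan) and correct it
# with White/robust standard errors. Data below is simulated.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 2))
X = sm.add_constant(x)
# Error variance grows with x[:, 0] -> heteroskedastic residuals.
e = rng.normal(size=500) * np.exp(x[:, 0])
y = X @ np.array([1.0, 2.0, -1.0]) + e

ols = sm.OLS(y, X).fit()
# Breusch-Pagan LM statistic is n * R^2 from regressing the squared
# residuals on the regressors; chi-square distributed under H0.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols.resid, X)
print(f"Breusch-Pagan LM = {lm_stat:.2f}, p = {lm_pvalue:.4f}")

# Correction: same coefficients, White (HC) robust standard errors.
robust = sm.OLS(y, X).fit(cov_type="HC1")
print(ols.bse)     # unreliable std errors
print(robust.bse)  # White-corrected std errors -> honest t-stats
```

Note the coefficient estimates are identical in both fits; only the standard errors (and hence the t-stats) change.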
that was good, thanks
Serial/Auto Correlation: What is it? When the residual terms are correlated with each other. What is the effect on regression analysis? Std errors are too small; coefficients remain accurate. With positive correlation, your std error is too small, which leads to large t-stats, which gives you too many Type I errors (rejecting the null when it is true). F-tests are UNRELIABLE. How do you detect it? The Durbin-Watson test, DW = 2(1 - r), where r is the correlation between consecutive residuals. Closer to 4 = negative correlation, closer to 0 = positive correlation, around 2 = no serial correlation. How do you correct it? Adjust the std errors using the Hansen method. Hansen dominates White/robust in the presence of both heteroskedasticity and serial correlation: use Hansen if both are there, use White/robust if hetero only.
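Here's a hedged sketch of the same workflow in Python (simulated AR(1) errors assumed): Durbin-Watson to detect, then Hansen-style (Newey-West HAC) standard errors to correct, via statsmodels:

```python
# Sketch: detect serial correlation (Durbin-Watson) and correct with
# HAC (Newey-West) standard errors. Data below is simulated.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)
# Positively autocorrelated errors: e_t = 0.7 * e_{t-1} + u_t
e = np.zeros(n)
u = rng.normal(size=n)
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + u[t]
y = 1.0 + 2.0 * x + e

ols = sm.OLS(y, X).fit()
# DW = 2(1 - r): near 0 -> positive correlation, near 4 -> negative.
print(f"Durbin-Watson = {durbin_watson(ols.resid):.2f}")

# Correction: HAC std errors, robust to BOTH serial correlation and
# heteroskedasticity (the "use Hansen if both are there" case).
hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 5})
print(ols.bse)  # too small under positive autocorrelation
print(hac.bse)  # corrected
```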
Multicollinearity: What is it? When two or more independent variables are highly correlated with each other. What is the effect on regression analysis? It distorts (inflates) the std errors, and with multicollinearity your chances of making a Type II error increase (incorrectly concluding that a variable is not statistically significant). How to detect it? Look at the t-tests. If none of the coefficients are statistically different from zero WHILE you have a significant F-test and a high R^2, chances are multicollinearity is present. On the coefficients: if the p-value is high, the coefficient is not significant; if the p-value is low, the coefficient is statistically different from zero. How do you correct for multicollinearity? Omit one of the correlated variables.
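You can reproduce exactly that symptom in a few lines, if that helps it stick. A minimal sketch (simulated data assumed): two nearly identical regressors give a significant F-test and high R^2, yet neither slope is individually significant:

```python
# Sketch: the classic multicollinearity signature. Simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)  # x2 nearly identical to x1
y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.normal(size=n)

res = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
# High R^2 and a significant F-test overall...
print(f"R^2 = {res.rsquared:.3f}, F p-value = {res.f_pvalue:.2e}")
# ...but the slope p-values are high: individually "insignificant".
print(res.pvalues)
```

Drop x2 and refit, and the remaining slope snaps back to being strongly significant, which is why omitting one of the correlated variables is the standard fix.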
Review: Heteroskedasticity: variance not the same across sample observations; std errors unreliable; F-test unreliable; Breusch-Pagan chi-square test (n x R^2) to detect; White-corrected/robust std errors to correct. Auto/serial correlation: residual terms correlated with each other (positive or negative). With positive correlation: small std errors, big t-stats, too many Type I errors. F-test unreliable. Durbin-Watson to detect, DW = 2(1 - r): closer to 4 is reject H0 because it's negative correlation, closer to 0 is reject H0 and it's positive, in the middle is fail to reject. Multicollinearity: two or more independent variables that are highly correlated with each other; too many Type II errors. Look at the t-tests: if no coefficients are different from zero AND you have a significant F-test and high R^2, you might have multicollinearity, so omit one of the variables.
I would think the F-test is unreliable because your sum of squared errors is unreliable, which flows through to MSE (the denominator of the F-test). Good review.
So what does one do to test for Covariance Stationarity?
*bump* Anyone answer this?
Dickey-Fuller test for a unit root.
Dickey-Fuller test.
Wow… is it really that easy? Haha… all this time I thought there was more to it than that. Thanks.
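It really is. For reference, a minimal sketch (random-walk series assumed) using statsmodels' augmented version of the test, adfuller:

```python
# Sketch: Dickey-Fuller unit-root test for covariance stationarity.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
random_walk = np.cumsum(rng.normal(size=500))  # unit root: NOT stationary

adf_stat, pvalue, *_ = adfuller(random_walk)
# H0: the series has a unit root (not covariance stationary).
# A high p-value means failing to reject -> don't treat it as stationary.
print(f"ADF = {adf_stat:.2f}, p = {pvalue:.4f}")
```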
Autoregressive conditional heteroskedasticity (ARCH) models incorporate past patterns of yield volatilities to forecast future patterns.
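If anyone wants to play with one, here's a minimal sketch (simulated returns assumed) fitting an ARCH(1) model with the `arch` package and forecasting next-period variance from past squared shocks:

```python
# Sketch: fit an ARCH(1) model and forecast variance. Simulated data.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(4)
returns = rng.normal(scale=1.0, size=1000)  # placeholder return series

am = arch_model(returns, mean="Constant", vol="ARCH", p=1)
res = am.fit(disp="off")
print(res.params)  # omega, alpha[1]: past squared shocks drive variance

# Forecast next-period variance from the fitted volatility pattern.
fcast = res.forecast(horizon=1)
print(fcast.variance.iloc[-1])
```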