Freaking out! PLEASE EXPLAIN QUANT to me.

Yes, I’ve read the whole curriculum and studied it, but now that I’m reviewing I’ve totally forgotten quant and need to refresh my memory.

I have a table that I think will help me a lot to review quant effectively. It’s from Kaplan and it goes:

Assessment of a Multiple Regression Model

  1. Is the model specified correctly? How do I know if it’s specified correctly, and what sort of test do I do?

  2. If it’s not specified correctly, how do I correct it?

  3. If it is correctly specified, do the t-test and F-test. OK, all good here.

  4. Is heteroskedasticity present? If yes, do a Breusch-Pagan chi-square test to check whether it’s conditional, and if it is, use the White-corrected standard errors (what are these??)

  5. After dealing with heteroskedasticity, do a DW test. If it comes up positive, use the Hansen method to adjust the standard errors (please explain this to me).

  6. If the DW test comes up negative, does the model have significant multicollinearity? What sort of test do I do here? How do I test this?

  7. If no multicollinearity is present, then the model is good; but if it is, just drop one of the correlated variables (how do I know which one is correlated?).

Thank you very much. I really appreciate any answers or explanations of these steps.

A (perhaps too simplified) explanation for the time being:

  1. Look at your R^2: is it high? Low? Look at the t-stats and p-values for all your slope coefficients. Are they significantly different from zero? Did you omit any variables that might be relevant? Remember that omitting a relevant variable can bias your slope coefficients. (There’s a short code sketch of these checks after this list.)

  2. A tough one to answer. Theoretically, you go back to the drawing board on this one. Is the data properly calculated/identified/presented? If not, you might have to do some data cleaning and processing. What problem did you see after doing point 1 above? The answer here depends on the problem you identified.

  3. Similar to what you do in step 1.

  4. Conditional heteroskedasticity is present when the variance of your error term is related to the level of your independent variable. To give an example, think of a dataset that measures expenditure (dependent variable) based on income (independent variable). At the lower end of income, you might not see much variation in expenditure: whoever earns a low income has to satisfy the basics (food, shelter, etc.). At the higher end of income, however, you will see more variation in expenditure. One data point might show very low expenditure (a person who saves a lot, or is thrifty), while another might show high expenditure from conspicuous consumption, etc. This dataset has conditional heteroskedasticity: your prediction error varies more as your independent variable (income in this case) increases. To test for it, you collect all the error terms (here, from the regression of expenditure on income) and regress the squared error terms on your independent variable. In our example, you will see that the variation in income explains some of the variance in the squared error terms. You then calculate the test statistic (n × R^2, which is chi-square distributed with degrees of freedom equal to the number of independent variables) and either reject or fail to reject the null hypothesis that no heteroskedasticity is present. For CFA purposes, the White-corrected standard error is just a new standard error calculated to correct for the heteroskedasticity; the slope coefficients themselves don’t change. (The test is sketched in code after this list.)

  5. As far as CFA Level II is concerned, Hansen’s method serves the same purpose as the White-corrected standard errors, except that Hansen’s method also corrects for serial correlation. Again, I highly doubt you need to know the details, beyond the fact that Hansen’s standard errors are used to adjust for serial correlation. (See the robust-standard-errors sketch after this list.)

  6. DW is used to detect serial correlation (where the error terms are correlated across your dataset). To detect multicollinearity (where the independent variables, the x_i, are correlated with one another), you look at your R^2, F-stat, and t-stats. If you have a high R^2 and a significant F-stat (indicating that the regression as a whole is robust) but the t-stats indicate that the individual slope coefficients are not statistically significant, you might have a multicollinearity issue in your regression. (Both diagnostics are sketched after this list.)

  7. Behind the scenes, you would calculate it (or, more precisely, tell the software to calculate it). For CFA purposes, you just need to know that you want to drop the independent variable that is highly correlated with the others (a correlation matrix, sketched last below, shows you which one that is).
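
Since concrete steps were asked for, here is a minimal sketch of the checks in answers 1 and 3, assuming Python with numpy and statsmodels. The data (income, age, spend) is made up purely for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
income = rng.normal(50, 10, n)   # hypothetical regressor
age = rng.normal(40, 8, n)       # hypothetical regressor
spend = 5 + 0.8 * income + 0.1 * age + rng.normal(0, 5, n)  # dependent variable

X = sm.add_constant(np.column_stack([income, age]))  # add intercept column
model = sm.OLS(spend, X).fit()

print(model.rsquared)   # R^2: share of variance explained
print(model.tvalues)    # t-stats for intercept and slopes
print(model.pvalues)    # p-values: are the slopes significantly non-zero?
print(model.fvalue)     # F-stat: joint significance of all slopes
```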
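
Next, the Breusch-Pagan procedure from answer 4, written out by hand so the n × R^2 step is visible (reusing `model`, `X`, and `n` from the sketch above; scipy assumed available):

```python
from scipy import stats

resid_sq = model.resid ** 2              # squared residuals from the fit
aux = sm.OLS(resid_sq, X).fit()          # regress them on the regressors
k = X.shape[1] - 1                       # number of slope coefficients
bp_stat = n * aux.rsquared               # Breusch-Pagan statistic: n * R^2
p_value = stats.chi2.sf(bp_stat, df=k)   # chi-square with k degrees of freedom
print(bp_stat, p_value)  # small p-value -> reject "no conditional heteroskedasticity"

# statsmodels also ships a ready-made version of the same test:
from statsmodels.stats.diagnostic import het_breuschpagan
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(model.resid, X)
```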
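
For the White-corrected standard errors in answer 4 and the serial-correlation-robust errors in answer 5: statsmodels’ closest analogue to the curriculum’s Hansen method is, to my understanding, the HAC (Newey-West-type) covariance estimator shown below. Either way, only the standard errors change; the slope coefficients stay the same. The maxlags value is an arbitrary choice for illustration.

```python
white_fit = sm.OLS(spend, X).fit(cov_type='HC0')  # White-corrected errors
hac_fit = sm.OLS(spend, X).fit(cov_type='HAC', cov_kwds={'maxlags': 4})

print(model.bse)      # original standard errors
print(white_fit.bse)  # robust to heteroskedasticity only
print(hac_fit.bse)    # robust to heteroskedasticity AND serial correlation
```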
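
Answer 6’s diagnostics: the Durbin-Watson statistic for serial correlation, plus variance inflation factors as one common way to quantify multicollinearity. (The curriculum leans on the high-R^2-but-insignificant-t-stats symptom; VIFs are an extra diagnostic, not something the exam requires.)

```python
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.outliers_influence import variance_inflation_factor

print(durbin_watson(model.resid))  # ~2 = no serial correlation;
                                   # toward 0 = positive, toward 4 = negative

for i in range(1, X.shape[1]):     # column 0 of X is the constant
    print(variance_inflation_factor(X, i))  # rule of thumb: > 10 is a red flag
```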
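
And for answer 7’s “which one is correlated?”: a pairwise correlation matrix of the regressors shows which pair moves together; conventionally you’d drop the one with the weaker theoretical justification.

```python
regressors = np.column_stack([income, age])
print(np.corrcoef(regressors, rowvar=False))  # off-diagonal entries near |1| flag the pair
```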