I have started to study for Level II relatively recently and have a few questions:
How can these three formulas be understood or derived?
1. Significance of the correlation coefficient: t = r * sqrt(n - 2) / sqrt(1 - r^2)
2. Estimation of b1: b1 = Cov(X, Y) / Var(X)
3. Standard error of forecast: sf^2 = SEE^2 * [1 + 1/n + (X - Xbar)^2 / ((n - 1) * Var(X))]
A formula for forecasting FCFE with the help of a target debt ratio (DR) was discussed in LOS 34.e, but I don’t really understand it, and I can’t see how it fits together with the rest of the FCFF/FCFE equations: FCFE = NI - (1 - DR)(FCInv - Dep) - (1 - DR) * WCInv
How do I use the Hansen method to correct standard errors?
Why is the earnings retention ratio abbreviated as b?
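To make the slope formula concrete, here is a quick numeric check in Python (the data are made up for illustration): the ratio Cov(X, Y) / Var(X) gives the OLS slope estimate, and the fitted line passes through the point of means.

```python
# Quick numeric check of the OLS slope formula b1 = Cov(X, Y) / Var(X).
# All data below are made-up example values.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Sample covariance and sample variance (both divide by n - 1,
# so the divisors cancel in the ratio).
cov_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / (n - 1)
var_x = sum((x - x_bar) ** 2 for x in xs) / (n - 1)

b1 = cov_xy / var_x            # slope estimate
b0 = y_bar - b1 * x_bar        # intercept: the line passes through (x_bar, y_bar)
print(b1, b0)
```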
It’s no different than any other test statistic calculation.
t = (estimate - hypothesized value)/standard error
r is the sample estimate for the true correlation, rho. The test is for a non-zero correlation coefficient so zero is the hypothesized (null) value for rho.
The standard error of the correlation coefficient is [(1 - r^2)/(n - 2)]^0.5. R-squared gives the percentage of sample variation in one variable explained by the other (in this case with only an X and Y variable). So 1 minus R-squared gives the variation in one variable left unexplained by the other. Degrees of freedom is (n - 2). So (1 - r^2)/(n - 2) can be thought of as a variance, and we take the square root to get a standard error for r.
Putting it together: (r - 0) / {[(1 - r^2)/(n - 2)]^0.5} = (r - 0) * [(n - 2)/(1 - r^2)]^0.5 = r * sqrt[(n - 2)/(1 - r^2)]
It’s a bit of rearranging the same process you use for other hypothesis tests (they just don’t do a very good job explaining that in the text).
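The rearrangement is easy to verify numerically. A minimal sketch with made-up values of r and n, computing the t-stat both as (estimate - 0)/standard error and via the textbook formula:

```python
import math

# Correlation t-stat computed two ways; r and n are made-up example values.
r = 0.60
n = 27

se_r = math.sqrt((1 - r**2) / (n - 2))           # standard error of r
t_generic = (r - 0) / se_r                       # generic test-statistic form
t_formula = r * math.sqrt((n - 2) / (1 - r**2))  # rearranged textbook form

print(t_generic, t_formula)  # the same number, just rearranged algebra
```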
This is used to obtain the uncertainty surrounding an estimate of Y (not the mean of Y) given a value of X (in SLR). Think of what this formula is telling you. The SEE^2 is (more or less) the total variation in the errors of prediction. So really, we have 3 parts to this formula:
sf^2 = SEE^2 * [1 + 1/n + (X - Xbar)^2 / ((n - 1) * Var(X))]
1. SEE^2
2. SEE^2 * (1/n)
3. SEE^2 * [(X - Xbar)^2] / [(n - 1) * Var(X)]
Part 1 is the unconditional variance of the errors of prediction.
Part 2 can be thought of as an “equal share” of the variance of the errors of prediction.
Part 3: let’s look at this in a few more pieces, starting with the denominator. (n - 1) * Var(X) essentially gives you the sum of squares for X about its mean (total variation). So now we can see that (X - Xbar)^2 in the numerator lets us answer the question: “For this value of X, how much of the total variation in X can we attribute, relatively speaking?” Values further from the mean are more unusual and more uncertain (a larger piece of SEE^2). Essentially, this third term “customizes” the increased uncertainty in predicting Y to reflect the relative weight of that particular value of X used in the prediction.
Adding all the parts gives the forecast variance; take the square root to get sf. That sum is just how we account for the additional uncertainty surrounding an estimate of Y for a particular X value (closer to the mean of X means less uncertainty). You should also start to see that the prediction intervals for Y, and the confidence intervals for the mean of Y, will each be narrowest (most precise) at the mean value of X.
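A minimal sketch of the forecast standard error (SEE and the X sample are made-up numbers), showing that the uncertainty is smallest at the mean of X and grows as X moves away from it:

```python
import math

# Forecast standard error in simple linear regression:
# sf^2 = SEE^2 * (1 + 1/n + (X - Xbar)^2 / ((n - 1) * Var(X)))
# SEE and the X sample below are made-up example values.
see = 2.0
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
n = len(xs)
x_bar = sum(xs) / n
var_x = sum((x - x_bar) ** 2 for x in xs) / (n - 1)

def sf(x):
    """Standard error of a forecast of Y at a given value of X."""
    return math.sqrt(see**2 * (1 + 1/n + (x - x_bar)**2 / ((n - 1) * var_x)))

# Uncertainty grows as X moves away from its mean (x_bar is 3.0 here):
for x in [3.0, 4.0, 5.0]:
    print(x, sf(x))
```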
Might I direct you to my post a few posts prior (I recall going through this question once or twice before on this forum, too). In seriousness, I don’t know why the books complicate things by not explaining it, or at the very least defining the standard error for the statistic. A lot of people (beyond this forum, too) scratch their heads and wonder what this “new” formula is when it’s really the same old thing with a standard error appropriate for the given estimator.