Explanations in Schweser for each seem almost identical, so I’m worried that this will come up on test day and be hard to distinguish. Anyone have a good way to keep these two straight?

Thanks!

Can anyone help?

There is a big difference between these two. R^2 is an indicator of whether an individual factor has explanatory power or not; F is an indicator of whether the whole model has explanatory power or not. Ideally, you want either both to show explanatory power or both not to. You don’t want F to show explanatory power while the individual R^2 doesn’t.

The F test and R2 are both at the model level. The F test tells you whether the independent variables are jointly significant in explaining the dependent variable. R2 measures how well the estimates have explained the actual dependent variable - it is a measure of the strength of the model. R2 & adjusted R2 are mainly used to assess whether the addition of an independent variable has increased the strength of the model. The F test is mainly used in conjunction with the t test to check and correct for situations where the independent variables are jointly significant (through the F test) but independently insignificant (under the t test). Think of the ANOVA table to remember R2 (need only RSS & TSS) and the F test (need square root MSR & square root MSE - i.e. RSS, TSS, k, n-k-1).
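A quick numeric sketch of that ANOVA bookkeeping, with made-up numbers (RSS, SSE, k, n are all hypothetical; note the F-stat is MSR/MSE itself, not its square root):

```python
# Hypothetical ANOVA numbers for a regression with k = 3
# independent variables and n = 50 observations.
RSS = 120.0       # regression (explained) sum of squares
SSE = 80.0        # residual sum of squared errors
SST = RSS + SSE   # total sum of squares = 200

k, n = 3, 50
R2 = RSS / SST            # 0.6: model explains 60% of the variance
MSR = RSS / k             # mean square regression
MSE = SSE / (n - k - 1)   # mean square error
F = MSR / MSE             # F-stat = MSR/MSE (no square root)

print(round(R2, 2), round(F, 2))  # → 0.6 23.0
```

Same four inputs (RSS, SST, k, n-k-1) feed both numbers, which is why the ANOVA table is a handy memory hook.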

What does everyone think of that?

And the F test is just MSR/MSE, not the square root.

Let me try to make this clear. R2 is the % of variance explained by the model; that is why it is RSS/SST.

The F test is a test stat calculated as MSR/MSE. It tells you whether at least one of the coefficients explains variation in the dependent variable. It’s a test of the model as a whole, and it can be used to infer multicollinearity if the coefficients are not significant but the F stat is.

I don’t know why you would take the square root of MSR/MSE, unless you are confusing it with the SEE, which is the square root of MSE.

The SEE, like R2, tells you how well the model explains the variance of the dependent variable. But the SEE is a standard deviation, so you want it to be small, while you want R2 to be large.
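To see the small-vs-large contrast concretely, here is a sketch with the same hypothetical numbers as above (SSE, SST, n, k are made up):

```python
import math

# Hypothetical regression: n = 50 observations, k = 3 variables.
SSE = 80.0    # residual sum of squared errors
SST = 200.0   # total sum of squares
n, k = 50, 3

MSE = SSE / (n - k - 1)
SEE = math.sqrt(MSE)    # standard error of estimate: smaller is better
R2 = 1 - SSE / SST      # R2: larger is better

print(round(SEE, 3), R2)  # → 1.319 0.6
```

The SEE is in the units of the dependent variable (it is a standard deviation of the residuals), while R2 is unitless, which is why one is judged "small" and the other "large".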

sorry for the typos, I am tapping this out on a Droid