Study Session 3: Quantitative Methods for Valuation
TL;DR: Does non-stationarity matter for trend-based time series models (i.e., time-series regressions that are not auto-regressive)?
I came across a question from a third-party provider in which we’re given two trend-based time-series models, both for sales. One is a single-variable regression that forecasts Sales; the second is a single-variable regression that forecasts ln(Sales). In both models, the independent variable is time (e.g., 1, 2, 3, …, 12 for each quarter of the last three years).
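For concreteness, here’s a minimal sketch of the two trend specifications using hypothetical quarterly sales figures (the numbers are made up purely for illustration; NumPy’s `polyfit` stands in for the regression):

```python
import numpy as np

# Hypothetical quarterly sales for 12 quarters (illustrative only)
t = np.arange(1, 13)
sales = np.array([100, 104, 109, 113, 118, 124, 129, 135,
                  141, 148, 154, 162], dtype=float)

# Linear trend model:     Sales_t = b0 + b1 * t + e_t
b1_lin, b0_lin = np.polyfit(t, sales, 1)

# Log-linear trend model: ln(Sales_t) = b0 + b1 * t + e_t
b1_log, b0_log = np.polyfit(t, np.log(sales), 1)

print(f"linear trend slope:     {b1_lin:.2f} (sales units per quarter)")
print(f"log-linear trend slope: {b1_log:.4f} (~constant growth rate per quarter)")
```

The linear model implies the same dollar increase each quarter, while the log-linear model implies a roughly constant percentage growth rate, which is why the curriculum prefers it when sales grow exponentially.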
I’m a little hung up on when we need to use the DW test vs. a t-test. My understanding of serial correlation is that it is the correlation of the error term with itself at different lags: r(e_t, e_t−k).
We can solve for that value, plug it into the DW formula, and check whether there’s serial correlation. However, when I reviewed the section on auto-regressive models (regressing a variable on its own past values), I saw that we CANNOT use DW to test for serial correlation in an AR model.
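A minimal sketch of the DW calculation on the residuals of a simple trend regression, using simulated data (everything below is illustrative, not from the provider’s question):

```python
import numpy as np

# Simulated trend data: y_t = 2 + 0.5*t + white noise
rng = np.random.default_rng(0)
t = np.arange(1, 41, dtype=float)
y = 2.0 + 0.5 * t + rng.normal(0, 1, size=t.size)

# OLS fit of y on a constant and t
X = np.column_stack([np.ones_like(t), t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Durbin-Watson: DW = sum((e_t - e_{t-1})^2) / sum(e_t^2)
# A value near 2 is consistent with no first-order serial correlation.
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print(f"DW = {dw:.2f}")
```

Note DW is only valid here because the regressors (a constant and time) are not lagged values of y; for an AR model the curriculum’s approach is instead to t-test the residual autocorrelations.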
From CFAI:
Suppose that you deleted several of the observations that had small residual values. If you re-estimated the regression equation using this reduced sample, what would likely happen to the standard error of the estimate and the R-squared?
The answer is:
SEE increases and R-squared decreases.
SEE = √(SSE / (n − 2)), so if we drop the observations with small residuals, SSE barely falls while n − 2 does, and SEE increases. And since R-squared = 1 − (unexplained variance / total variance), would dropping observations reduce total variance by more than unexplained variance, so that the unexplained/total ratio rises and R-squared falls?
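A quick simulation of the effect described in the answer, using made-up data: fit a regression, drop the observations with the smallest absolute residuals, refit, and compare SEE and R-squared:

```python
import numpy as np

# Made-up data: y = 3 + 2x + noise
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 40)
y = 3.0 + 2.0 * x + rng.normal(0, 1.0, size=x.size)

def fit_stats(x, y):
    """OLS fit; return SEE, R-squared, and residuals."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sse = np.sum(resid ** 2)
    sst = np.sum((y - y.mean()) ** 2)
    see = np.sqrt(sse / (len(y) - 2))   # standard error of estimate
    r2 = 1.0 - sse / sst
    return see, r2, resid

see_full, r2_full, resid = fit_stats(x, y)

# Drop the 10 observations with the smallest |residual|, then refit
keep = np.argsort(np.abs(resid))[10:]
see_red, r2_red, _ = fit_stats(x[keep], y[keep])

print(f"full sample:    SEE={see_full:.3f}, R^2={r2_full:.3f}")
print(f"reduced sample: SEE={see_red:.3f}, R^2={r2_red:.3f}")
```

In the reduced sample only the large residuals remain, so the average squared residual per degree of freedom rises (SEE up) and the unexplained share of total variance rises (R-squared down), matching the CFAI answer.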
Would the b0 and b1 values (b0-hat and b1-hat) be provided in the question itself?
In the curriculum examples the values are given; would that be the case in mocks and exams as well?
What is the difference between the two models? I know that the seasonality model eliminates the autocorrelation due to seasonality by adding a seasonal lag, but the AR(2) also adds an extra lag. Is the only difference that the seasonality model uses logs in the formula and the AR(2) doesn’t?
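To make the two specifications concrete, here’s a sketch (simulated quarterly data, plain-NumPy OLS) of an AR model with a seasonal lag next to an AR(2). The difference is *which* lags enter: the seasonal model pairs lag 1 with the seasonal lag 4 (for quarterly data), while the AR(2) uses lags 1 and 2:

```python
import numpy as np

# Simulate a quarterly series with first-order and seasonal (lag-4) dependence
rng = np.random.default_rng(2)
n = 60
x = np.zeros(n)
for t in range(4, n):
    x[t] = 0.5 * x[t - 1] + 0.3 * x[t - 4] + rng.normal(0, 0.1)

def ols_lags(x, lags):
    """Regress x_t on a constant plus x_{t-L} for each L in lags."""
    p = max(lags)
    y = x[p:]
    X = np.column_stack([np.ones_like(y)] +
                        [x[p - L : len(x) - L] for L in lags])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, coef for each lag]

beta_seasonal = ols_lags(x, [1, 4])  # seasonal model: lags 1 and 4
beta_ar2 = ols_lags(x, [1, 2])       # AR(2): lags 1 and 2
print("seasonal lags [1, 4] coefs:", np.round(beta_seasonal, 2))
print("AR(2) lags [1, 2] coefs:   ", np.round(beta_ar2, 2))
```

The log transformation is a separate choice about the *level* of the series (used when growth is multiplicative); either specification can be estimated on logged or unlogged data.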
I have a silly question regarding “Multiple R” (and I didn’t find the answer in the forum). Multiple R is defined as the correlation between the predicted and actual values of the dependent variable. Why are the values of the IV called the “predicted values” and the values of the DV the “actual values”?
Also, is it OK to assume that in simple linear regression (with one IV) r can range from −1 to 1, while Multiple R ranges from 0 to 1? Why is that? How can we calculate Multiple R?
Thanks in advance!
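One way to see what Multiple R is: the “predicted values” are the fitted values ŷ of the dependent variable (not the IVs themselves), and Multiple R is their correlation with the actual y. A sketch with made-up data:

```python
import numpy as np

# Made-up two-IV data set
rng = np.random.default_rng(3)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(0, 1, size=50)

# OLS fit, then fitted ("predicted") values of the DV
X = np.column_stack([np.ones_like(y), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# Multiple R = corr(actual y, fitted y); non-negative for an OLS fit with intercept
multiple_R = np.corrcoef(y, y_hat)[0, 1]
print(f"Multiple R = {multiple_R:.3f}")
```

With one IV, |r(x, y)| equals Multiple R; r itself carries the sign of the slope, but the fitted values always move with y as much as the model allows, which is why Multiple R is bounded between 0 and 1.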
Reproduced in image below.
What do the t-statistics shown in the table represent? Are they simply the t-statistics of alpha (the intercept) and beta (the slope coefficient), at a significance level that isn’t disclosed to us? I know it’s ancillary to the questions being asked, but I’m just trying to make sense of the various numbers shown in the question stem.
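For what it’s worth, each t-statistic in a regression table is just the estimated coefficient divided by its standard error; no significance level enters the statistic itself (the level only matters when you compare it to a critical value). A sketch with made-up data:

```python
import numpy as np

# Made-up data: y = 5 + 1.2x + noise
rng = np.random.default_rng(4)
x = np.linspace(1, 20, 20)
y = 5.0 + 1.2 * x + rng.normal(0, 2, size=x.size)

# OLS fit of y on a constant and x
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
n, k = X.shape

# Coefficient standard errors from s^2 * (X'X)^{-1}
s2 = resid @ resid / (n - k)
se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))

# t-statistic = coefficient / standard error (for each coefficient row)
t_stats = beta / se
print("t-stats (intercept, slope):", np.round(t_stats, 2))
```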