I have a really hard time understanding this part of the material. My confusion is:
To use an autoregressive model, the time series has to be covariance stationary (constant mean and covariance). If a model's residuals are not autocorrelated, then the model is well specified (covariance stationary). However, a random walk model's error terms are uncorrelated, yet a random walk is NOT covariance stationary.
This seems quite contradictory to me, and the textbook does not explain it clearly. Does anyone have any ideas?
For a random walk y_t = y_{t-1} + e_t, the covariance with the previous value is

cov(y_t, y_{t-1}) = cov(y_{t-1} + e_t, y_{t-1}) = var(y_{t-1}) + cov(e_t, y_{t-1}) = var(y_{t-1}),

where I have used the bilinearity of the covariance, as well as the fact that the covariance of two independent random variables is zero. Now, assuming (without loss of generality) that the random walk starts at zero, you can write

y_{t-1} = e_1 + e_2 + ... + e_{t-1}, so var(y_{t-1}) = (t-1)*s^2,

where s is the standard deviation of the error terms and I have used the fact that the errors are uncorrelated. Now to summarize the findings for the covariance:

cov(y_t, y_{t-1}) = var(y_{t-1}) = (t-1)*s^2,

meaning that the covariance increases with time, so a random walk is not covariance stationary (even though its errors are uncorrelated).
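You can confirm this numerically. Below is a quick simulation sketch (using numpy, with arbitrary choices s = 1 and T = 50): it generates many independent random-walk paths and checks that the sample variance of y_t grows like t*s^2, and that cov(y_t, y_{t-1}) matches var(y_{t-1}).

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, T, s = 100_000, 50, 1.0  # many replications of a length-50 random walk

# y_t = y_{t-1} + e_t with y_0 = 0, so y_t is just the cumulative sum of the errors
errors = rng.normal(0.0, s, size=(n_paths, T))
y = np.cumsum(errors, axis=1)  # y[:, t-1] holds y_t for t = 1..T

for t in (10, 25, 50):
    print(f"t={t}: sample var = {y[:, t-1].var():.2f}, theory t*s^2 = {t * s**2:.2f}")

# cov(y_50, y_49) should equal var(y_49) = 49*s^2
c = np.cov(y[:, 49], y[:, 48])[0, 1]
print(f"sample cov(y_50, y_49) = {c:.2f}, theory (t-1)*s^2 = {49 * s**2:.2f}")
```

The variance (and hence the covariance with the previous value) clearly depends on t, which is exactly what covariance stationarity forbids.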
Typically, for a stationary time series, either autocorrelations at all lags are statistically indistinguishable from zero, or the autocorrelations drop off rapidly to zero as the number of lags becomes large.
(CFA Institute 443)

CFA Institute. 2018 CFA Program Level II Volume 1: Ethical and Professional Standards, Quantitative Methods, and Economics. CFA Institute, July 2017. VitalBook file.
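The quoted behavior is easy to check numerically. A sketch (numpy, with hypothetical series lengths and a simple textbook ACF estimator) comparing the sample autocorrelations of white noise, which is covariance stationary, against those of a random walk, which is not:

```python
import numpy as np

def sample_acf(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    x = x - x.mean()
    return (x[lag:] * x[:-lag]).sum() / (x * x).sum()

rng = np.random.default_rng(2)
T = 10_000
noise = rng.normal(size=T)             # covariance stationary: white noise
walk = np.cumsum(rng.normal(size=T))   # random walk: not stationary

for lag in (1, 5, 20):
    print(f"lag {lag:2d}: noise ACF = {sample_acf(noise, lag):+.3f}, "
          f"walk ACF = {sample_acf(walk, lag):+.3f}")
```

For the stationary series the autocorrelations are statistically indistinguishable from zero at every lag, while for the random walk they stay close to one even at large lags, matching the quoted rule of thumb.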
The first model we learned is the linear regression model. This model works great when the data are linear.
- The problem with many time series is that the data aren't linear, they're exponential. What this means is that the error terms tend to correlate with one another. To fix this we use a log-linear model instead, which removes the correlation.
- Sometimes log-linear isn't enough, and we use an autoregressive model. THE WHOLE POINT IS TO MAKE THE ERROR TERMS UNCORRELATED. Covariance stationarity is more than just uncorrelated error terms: it also imposes conditions on the expected mean and the variance.
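To see those extra conditions in action, here is a minimal sketch (numpy, with arbitrary coefficients b0 = 2, b1 = 0.5, s = 1) of an AR(1) process x_t = b0 + b1*x_{t-1} + e_t with |b1| < 1: its errors are uncorrelated AND its mean and variance settle at constant long-run values, unlike the random walk (the b1 = 1 case).

```python
import numpy as np

rng = np.random.default_rng(1)
b0, b1, s, T = 2.0, 0.5, 1.0, 200_000  # AR(1): x_t = b0 + b1*x_{t-1} + e_t

x = np.empty(T)
x[0] = b0 / (1 - b1)              # start at the long-run (mean-reverting) level
e = rng.normal(0.0, s, size=T)
for t in range(1, T):
    x[t] = b0 + b1 * x[t - 1] + e[t]

# For a stationary AR(1): mean = b0/(1 - b1), variance = s^2/(1 - b1^2)
print(f"sample mean: {x.mean():.3f}, theory: {b0 / (1 - b1):.3f}")
print(f"sample var : {x.var():.3f}, theory: {s**2 / (1 - b1**2):.3f}")
```

Setting b1 = 1 turns this into the random walk: the mean-reversion level b0/(1 - b1) is undefined and the variance grows without bound, which is why the AR model's stationarity requirement rules the random walk out.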