Sorry, you’ll probably see a few posts from me tonight as I’m marking a mock exam, but annoyingly the answer sheet only has a couple of lines for each question. I am trying to piece time series together. The below may be a bit wordy, but if anyone can clarify what I’m saying and answer any of the questions, that would be amazing.
With covariance stationarity, are we essentially saying that a time series must have:
An expected value (mean) that is constant and finite
A constant and finite variance
A constant and finite covariance of the time series with its own lagged values
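For concreteness, here is a quick numpy sketch (my own toy example, not from any study material) of what those three conditions look like for a stationary AR(1): with |b1| < 1, the mean and variance settle down to constants, so the two halves of a long simulated series should look alike.

```python
# Toy example: simulate a covariance-stationary AR(1) and eyeball
# the constant-mean / constant-variance conditions.
import numpy as np

rng = np.random.default_rng(0)
n, b0, b1 = 10_000, 1.0, 0.5          # |b1| < 1 -> covariance stationary
x = np.zeros(n)
for t in range(1, n):
    x[t] = b0 + b1 * x[t - 1] + rng.normal()

first, second = x[: n // 2], x[n // 2 :]
# Mean and variance should be roughly the same in both halves, and close
# to the theoretical values b0/(1-b1) = 2 and 1/(1-b1**2) = 1.33.
print(first.mean(), second.mean())
print(first.var(), second.var())
```

If you re-run this with b1 = 1 (a random walk) the two halves drift apart, which is the nonstationary case discussed below.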
So, if we add more and more years of data, is there an increasing chance that the series will turn out to be nonstationary? Is this the same thing as saying that with more data points there is an increasing probability of finding significant lags in the data?
Generally speaking, we are interested in time series data with more periods, right? More periods are more desirable so long as the series remains covariance stationary?
Right, now let’s say we have nonstationarity: is this where an AR model comes into play? The AR model will test for lags, and if it is correctly specified we should have no autocorrelation (which is one of the original reasons we are nonstationary).
So in all of this where does an ARCH model come into play? And what about random walks and Unit roots?
For anyone that didn’t fall asleep by the end of this post, I will be eternally grateful for any explanations.
Finally, when we are testing for the significance of autocorrelations, we want to make sure that the autocorrelations are not statistically significant. But how come the intercept and slope coefficients should be significant?
I don’t think you’ll need to go into this much detail in the exam. Just leave it and move on. I think yes - more periods = better as long as the series is stationary.
An AR model doesn’t automatically mean non-stationarity. An AR model is simply the random variable X being regressed onto its own past values, so Xt = b0 + b1*Xt-1 + error term.
For the model to work, it has to be covariance stationary.
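That AR(1) equation is just an ordinary regression of Xt on Xt-1. A minimal sketch (simulated data, plain numpy, so the "true" b0 and b1 here are made up for illustration):

```python
# Estimate Xt = b0 + b1*Xt-1 + error by OLS on the lagged series.
import numpy as np

rng = np.random.default_rng(1)
n, true_b0, true_b1 = 5_000, 0.3, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = true_b0 + true_b1 * x[t - 1] + rng.normal()

y, x_lag = x[1:], x[:-1]                       # regress Xt on Xt-1
A = np.column_stack([np.ones(len(x_lag)), x_lag])
(b0_hat, b1_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
print(b0_hat, b1_hat)                          # close to 0.3 and 0.7
```

Because |b1| < 1 here, the series is covariance stationary and OLS recovers the coefficients; with b1 = 1 (unit root) that would no longer be reliable.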
An ARCH model is AR + CH, i.e. autoregression combined with conditional heteroskedasticity. What does that mean? To test for it, you regress the squared error terms onto their own lagged values, to check whether the size of one error is related to the size of the previous one (you want the answer to be NO). If you end up FAILING TO REJECT the null hypothesis, that’s a great thing, because it means one error term’s variance isn’t related to the next! So this is the opposite of a normal regression, where typically you want to reject the null.
The answer to your question on “how come the intercept and slope coefficients should be significant” is that you’re mixing up normal regression with ARCH. A normal regression wants you to REJECT the null, i.e. reject that there is no relation, because you WANT there to be a relation between X and Y. But in ARCH world you are regressing the squared errors onto their own lags, and in this world you DON’T want there to be any significance, because if there is, it means the error terms’ variances are related to each other, and you DON’T want that!
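To make the ARCH test concrete, here is a rough sketch (my own simulated example, manual OLS rather than a packaged test): errors are generated with ARCH(1) dynamics, then squared residuals are regressed on their own lag. A large t-statistic on the slope means you reject H0 of no ARCH, which in this rigged example is exactly what should happen.

```python
# Sketch of the ARCH test: regress squared residuals on their own lag;
# a significant slope coefficient means ARCH effects are present.
import numpy as np

rng = np.random.default_rng(2)
n, a0, a1 = 5_000, 0.2, 0.5                 # ARCH(1) error process
e = np.zeros(n)
for t in range(1, n):
    sigma2 = a0 + a1 * e[t - 1] ** 2        # variance depends on last error
    e[t] = np.sqrt(sigma2) * rng.normal()

y, x = e[1:] ** 2, e[:-1] ** 2              # squared residuals vs. their lag
A = np.column_stack([np.ones(len(x)), x])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
se = np.sqrt(np.sum(resid**2) / (len(y) - 2) / np.sum((x - x.mean()) ** 2))
t_stat = coef[1] / se
print(t_stat)  # well above ~2 -> reject H0 of no ARCH
```

With homoskedastic errors (set a1 = 0) the slope would be insignificant, which is the outcome you actually hope for when checking a real model.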
Onto random walks: that’s a special case of an autoregressive model (X regressed onto its own past) where the intercept is 0 and b1 (the slope) = 1. A series with b1 = 1 is said to have a unit root, and a unit root test checks for this kind of nonstationarity. The problem with random walks is that they are NOT covariance stationary, so they break the main conditions above. The Dickey-Fuller test comes in here to test for a unit root (is b1 - 1 = 0?). If b1 indeed = 1, it means the slope = 1 and therefore there IS a unit root problem. Then next up is Engle-Granger, which says: hold on, you can still do linear regression using two time series under two conditions only, (a) neither has a unit root, OR (b) both have a unit root and they are cointegrated.
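The Dickey-Fuller idea can be sketched in a few lines (toy data only; the real test also needs its special non-standard critical values, which I’ve left out): regress the change in X on lagged X, and the slope of that regression is g = b1 - 1. For a random walk, g should come out near 0.

```python
# Dickey-Fuller sketch: regress the first difference of X on lagged X.
# The slope g estimates b1 - 1; g near 0 is consistent with a unit root.
import numpy as np

rng = np.random.default_rng(3)
walk = np.cumsum(rng.normal(size=5_000))        # random walk: b0 = 0, b1 = 1

dy, x_lag = np.diff(walk), walk[:-1]
A = np.column_stack([np.ones(len(x_lag)), x_lag])
(_, g), *_ = np.linalg.lstsq(A, dy, rcond=None)
print(g)  # close to 0 -> consistent with a unit root
```

Run the same regression on a stationary AR(1) and g comes out clearly negative, which is how the test tells the two cases apart.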
I just want to add that I think the Dickey-Fuller test is the only one where the null is that the series DOES have the problem (a unit root). All the others have a null of no problem: Breusch-Pagan (H0: no conditional heteroskedasticity), Durbin-Watson (H0: no serial correlation).