Time Series

In the examples for autocorrelations in both Schweser and CFAI, they list the lags and their autocorrelations. My question is: for 59 observations, they only list lags 1-4. Why don’t we have to test ALL the observations for autocorrelation?

The lags 1-4 are possible explanatory variables (think degrees of freedom). The observations are what make up your sample for the regression, and the more observations the better, as your standard error of estimate will shrink for the same sum of squared errors. Generally in regression problems you have only a few explanatory variables (like 2 or 3, not more than 7 or 8 in APT), but usually at least 30 observations, and again, the more the better.
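
For context, this is roughly how those lag 1-4 checks are done in the readings: each residual autocorrelation is compared against a standard error of 1/sqrt(T). A minimal sketch, where the residual series is simulated purely for illustration:

```python
import numpy as np

# Simulated stand-in for the residuals of a time-series regression with T = 59 observations.
rng = np.random.default_rng(0)
resid = rng.standard_normal(59)

T = len(resid)
se = 1 / np.sqrt(T)            # approximate standard error of each residual autocorrelation

for k in range(1, 5):          # lags 1 through 4, as in the Schweser/CFAI examples
    r = np.corrcoef(resid[k:], resid[:-k])[0, 1]
    t_stat = r / se
    print(f"lag {k}: autocorrelation = {r:+.3f}, t-stat = {t_stat:+.2f}")
```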

Appreciate the response, I am somewhat unclear in that case - what is a lag then? I thought it was just the IVs at each point in time…

We should check the significance of the autocorrelations for ALL lagged variables. If some examples show only 4, it is assumed that the autocorrelations for the other lags are not significant and they chose not to print them. I have never done any regression, but I guess that when you actually run a regression in software, it will show autocorrelations for ALL lags, not only the first few (see the sketch below). Then, based on their significance, you may choose to add or drop independent variables.
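
To that point about software output: a package like statsmodels will report the autocorrelation (and a Ljung-Box p-value) at every lag you ask for, not just the first few. A rough sketch, again using a simulated residual series as a stand-in:

```python
import numpy as np
from statsmodels.tsa.stattools import acf

# Simulated stand-in for regression residuals (59 observations, as in the example).
rng = np.random.default_rng(1)
resid = rng.standard_normal(59)

# acf() reports every lag up to nlags; qstat=True adds Ljung-Box statistics and p-values.
rho, confint, qstat, pvals = acf(resid, nlags=12, qstat=True, alpha=0.05)
for k, (r, p) in enumerate(zip(rho[1:], pvals), start=1):
    print(f"lag {k:2d}: acf = {r:+.3f}, Ljung-Box p-value = {p:.3f}")
```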

I see, so when they say “here are lags 1-4”, they just mean “here are the first four data points”?

Yes, the first 4 data points going backwards from today: Lag 1 being that variable’s past value at time t-1, Lag 2 being that same variable’s past value at time t-2, and so on…

If I can predict next year’s sales from last year’s sales, then Lag 1 is important and should be included as an independent variable in my regression. And if I can predict even better knowing the last 2 years’ sales, then Lag 2 is also important and should be included as well. But if, say, I know my sales in 1960 and that does not help me predict next year’s sales any BETTER, then that Lag 50 is not important and should not be included as another independent variable in my estimation (see the sketch below). Hope that clarifies lags a bit.
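
To make the lag idea concrete, here is a hedged sketch of what those regressions look like: an AR(1) uses last year’s sales (lag 1) as the independent variable, and an AR(2) adds the year before that (lag 2). The sales numbers below are made up purely for illustration:

```python
import numpy as np
import statsmodels.api as sm

# Made-up annual sales figures, purely for illustration.
sales = np.array([100, 104, 109, 112, 118, 121, 127, 131, 138, 142,
                  149, 153, 160, 166, 171, 179, 184, 192, 199, 205], dtype=float)

# AR(1): next year's sales explained by last year's sales (lag 1 only).
y1 = sales[1:]
x1 = sm.add_constant(sales[:-1])
ar1 = sm.OLS(y1, x1).fit()

# AR(2): add the year before that (lag 2); keep it only if its coefficient is significant.
y2 = sales[2:]
x2 = sm.add_constant(np.column_stack([sales[1:-1], sales[:-2]]))   # lag 1, lag 2
ar2 = sm.OLS(y2, x2).fit()

print("AR(1) coefficients:", ar1.params)
print("AR(2) lag-2 p-value:", ar2.pvalues[2])
```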

Thanks a lot, I appreciate the help. One more question… What is the difference between error autocorrelation and ARCH?

I may not be correct on this one, but this is what I think: autocorrelation is when the ‘value’ of an error term is correlated with the value of another error term, and ARCH is when the ‘variance’ of an error term is correlated with the variance of another error term. Can someone please confirm/correct this?

rus1bus wrote:
> Autocorrelation is when the ‘value’ of an error term is correlated with the value of another error term. And ARCH is when ‘variance’ of an error term is correlated with the variance of another error term.

If by error term you mean variable, then yes it is correct.

I know this is an old post, but it might confuse someone today… the explanation made by rus1bus is correct. It’s “error term”, not “variable” as idreesz stated.
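
For anyone who wants the distinction concretely, here is a minimal sketch under simulated residuals (so the series and numbers are purely illustrative): serial correlation is tested on the residuals themselves, while an ARCH(1) test regresses the squared residuals on their own first lag.

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in for residuals from a time-series regression.
rng = np.random.default_rng(2)
resid = rng.standard_normal(59)

# Serial correlation of the errors: do the residuals themselves predict next period's residual?
serial = sm.OLS(resid[1:], sm.add_constant(resid[:-1])).fit()

# ARCH(1): do the *squared* residuals (a proxy for variance) predict next period's squared residual?
eps2 = resid ** 2
arch = sm.OLS(eps2[1:], sm.add_constant(eps2[:-1])).fit()

print("serial-correlation slope p-value:", round(serial.pvalues[1], 3))
print("ARCH(1) slope p-value:           ", round(arch.pvalues[1], 3))
```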