Hi all - I’ve just read about unit roots and the Dickey-Fuller test for the first time, and I’m not 100% sure I see it…

So I get that if the coefficient on the independent variable (a lag of the dependent variable) is 1, then there is a unit root, because the only change in the dependent variable comes from the error term (plus the drift, aka b0). And we can say that a coefficient of 1 is NOT covariance stationary in this case because you cannot predict the mean value of the dependent variable. The book goes on to say that the absolute value of b1 (the weight on the lag) must be strictly less than 1. If it were greater than 1 we would have an ever-increasing mean, and a variance approaching infinity as t approaches infinity. That makes sense.

Negative values of b1 (values from -1 to 0) are giving me a little trouble. I don’t see how they are covariance stationary, because a negative coefficient would seem to imply a much higher variance in the earlier part of the series (oscillation between positive and negative as the series approaches a limit of 0) than in the later part.
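As a quick sanity check on the negative-b1 worry, here is a minimal simulation sketch (assumptions for illustration: b0 = 0, starting value x0 = 0, standard normal errors - none of these come from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(b1, n, x0=0.0):
    """Simulate x_t = b1 * x_{t-1} + e_t with standard normal errors
    (a sketch: b0 = 0 and x0 = 0 are simplifying assumptions)."""
    x = np.empty(n)
    prev = x0
    for t in range(n):
        prev = b1 * prev + rng.standard_normal()
        x[t] = prev
    return x

x = simulate_ar1(-0.9, 100_000)
# The stationary variance is sigma^2 / (1 - b1^2) = 1 / (1 - 0.81),
# about 5.26 - the same as for b1 = +0.9, since only b1^2 enters.
print(np.var(x[:50_000]), np.var(x[50_000:]))
```

In this sketch the early-half and late-half sample variances come out about the same, suggesting the oscillation changes the sign pattern of the path rather than its spread.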

My biggest problem, however, is with the Dickey-Fuller test itself. The book says the null hypothesis of the Dickey-Fuller test is g0 = b1 - 1 = 0 (and therefore not covariance stationary), and the alternative hypothesis is g0 less than 0 (and therefore covariance stationary). First off, a rejection of the null hypothesis g0 = 0 would not immediately indicate that g0 is less than 0. For example, suppose b1 = 300 and g0 = 299. Barring a huge standard deviation, the t-statistic would be huge for most sample sizes, and we would reject the null hypothesis that g0 = 0 and accept… the alternative that g0 is less than 0!?! I don’t think so. And if b1 = -300 and g0 = -301 we would also reject the null - but would we conclude that the series is covariance stationary?

The test addresses the case of a simple random walk (b1 = 1) but doesn’t address covariance stationary behavior or the bounds -1 strictly less than b1 strictly less than 1. Am I missing something?

The Dickey-Fuller test regresses the first difference of the time series on the lagged level of the series and evaluates whether the coefficient on that lagged level, (b1-1), is 0 - i.e., whether the change from t to t+1 is unrelated to the level at time t. We infer that if (b1-1) = 0 then b1 = 1 and there exists a unit root. The null hypothesis, however, is that b1-1 equals 0, and the alternative hypothesis is that b1-1 is less than 0 and the series is therefore covariance stationary.
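To make that regression concrete, here is a minimal numerical sketch of the no-drift version, delta_x_t = g1 * x_{t-1} + e_t, estimated by ordinary least squares through the origin (both the absence of a drift term and the normal errors are simplifying assumptions, not the full textbook setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two illustrative series (assumptions: b0 = 0, standard normal errors).
n = 500
walk = np.cumsum(rng.standard_normal(n))   # b1 = 1.0 -> g1 = 0 (unit root)
stat = np.empty(n)                          # b1 = 0.5 -> g1 = -0.5
stat[0] = 0.0
for t in range(1, n):
    stat[t] = 0.5 * stat[t - 1] + rng.standard_normal()

def df_slope(x):
    """OLS slope g1 in the no-drift Dickey-Fuller regression
    delta_x_t = g1 * x_{t-1} + e_t (regression through the origin)."""
    dx, lag = np.diff(x), x[:-1]
    return (lag @ dx) / (lag @ lag)

print(df_slope(walk))   # close to 0 for the random walk
print(df_slope(stat))   # close to -0.5 for the stationary series
```

One caveat worth keeping in mind: under the unit-root null, the t-statistic from this regression does not follow the usual t-distribution, which is why the test uses Dickey-Fuller critical values rather than standard t-tables.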

So first, rejecting b1-1 EQUALS 0 does not mean that b1-1 is LESS THAN 0, and if b1-1 is GREATER THAN 0 then the series is not covariance stationary (b1 greater than 1 is an explosive root!). Assume, for example, that the coefficient on the first lag of an autoregression were 10. That would mean the next value in the series would be 10 times the magnitude of the previous value (assuming b0 = 0). This would definitely NOT satisfy b1-1 = 0; we would reject the null hypothesis and WRONGLY conclude that the series is covariance stationary. Likewise, if the coefficient were -10 we would reject the null and conclude covariance stationarity, but this would be another explosive root (this time a negative explosive root).

The book’s criterion for covariance stationarity is that b1 (the coefficient in the original autoregression) is bounded by -1 and 1. For simplicity and without loss of generality, let’s assume b0 = 0 (the intercept is 0). I’m concerned about any regression with a coefficient in the -1 to 0 range (exclusive of -1). The book claims an autoregression with, for example, a coefficient of -0.999 is covariance stationary. The series should mean revert to 0 but will oscillate continuously from positive to negative. If we evaluate the variance in the earlier part of the time series, it will be higher than in the later part (once the series has approached its mean-reversion level). So I would have guessed that negative coefficients on a lag are NOT covariance stationary.

Does that make more sense or are my questions still confusing?

I’m no expert on time series and autoregression and so on, but I may have some clues:

Nowhere do I read that if an AR(1) model has a finite mean-reverting level then it is covariance stationary. What I read is that if it is covariance stationary, then it has a finite mean-reverting level; this doesn’t mean that the converse is necessarily true.

If |b1| ≥ 1, then the series clearly doesn’t have a finite mean-reverting level. The fact that you don’t get a zero denominator is a necessary condition for a finite mean-reverting level, but not a sufficient condition. For example, if x0 = 10, b0 = 1, and b1 = 1.1, then all of the xs are positive, but b0 / (1 – b1) = -10; clearly, this makes no sense. Thus, it may be that the Dickey-Fuller test (in practice) isn’t even used unless there’s strong evidence that |b1| is less than 1 (or at least not too much bigger than 1).
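A tiny deterministic check of that example (the error term is ignored here, purely for illustration):

```python
# b0 = 1, b1 = 1.1: the formula b0 / (1 - b1) gives 1 / (1 - 1.1) = -10,
# yet iterating x_t = 1 + 1.1 * x_{t-1} from x0 = 10 (errors ignored for
# illustration) moves away from -10 rather than reverting toward it.
x = 10.0
for _ in range(50):
    x = 1.0 + 1.1 * x
print(x)  # far above the "mean-reverting level" of -10, and still growing
```

So the formula produces a finite number here, but it is not a level the series reverts to, which matches the point that a nonzero denominator is necessary but not sufficient.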

It’s possible that the Dickey-Fuller test is a one-tail test. I don’t know.
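For what it’s worth, the Dickey-Fuller test is conventionally run as a one-tailed (left-tail) test against its own nonstandard critical values, which bears directly on the explosive-root worry. A rough sketch (simplifying assumptions: no drift term, standard normal errors):

```python
import numpy as np

rng = np.random.default_rng(2)

def df_tstat(x):
    """t-statistic from the no-drift Dickey-Fuller regression
    delta_x_t = g1 * x_{t-1} + e_t (OLS through the origin)."""
    dx, lag = np.diff(x), x[:-1]
    g1 = (lag @ dx) / (lag @ lag)
    resid = dx - g1 * lag
    se = np.sqrt(resid @ resid / (len(dx) - 1) / (lag @ lag))
    return g1 / se

# Explosive root: x_t = 1.1 * x_{t-1} + e_t  (so g1 = b1 - 1 = +0.1)
n = 200
x = np.empty(n)
x[0] = 1.0
for t in range(1, n):
    x[t] = 1.1 * x[t - 1] + rng.standard_normal()

print(df_tstat(x))  # large and POSITIVE
```

In this sketch the explosive series produces a large positive t-statistic, and a left-tailed test only rejects for large negative values, so the explosive case would not be mistaken for stationarity under the one-tailed convention.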

Thanks for the reply S2000. I really appreciate it. Here are my thoughts…

Agreed, and I didn’t mean to imply that. What’s throwing me off in the book is this (book 1, top of page 454): “The null hypothesis of the Dickey-Fuller test is H0: g1 = 0 - that is, the time series has a unit root and is nonstationary - and the alternative hypothesis is Ha: g1 is less than 0 - that is, the time series does not have a unit root and is stationary.”

So… okay, maybe they were liberal in writing this section and simply meant - null: there is a unit root and the series is nonstationary; alternative: there is no unit root, and we don’t know whether it’s stationary or not. That would answer PART of my question/concern.

But on page 453, in the middle of the page: “If a time series comes from an AR(1) model, then to be covariance stationary the absolute value of the lag coefficient, b1, must be less than 1.0.” Absolute value implies we have two cases: b1 strictly between 0 and 1.0, or b1 strictly between -1.0 and 0. I understand the former, but the latter doesn’t fit for me. Assume the regression x(t+1) = -0.99(x(t)); then the series does mean revert, but it has a much larger standard deviation earlier in the series. I don’t understand why they say “absolute value” when, to me, the coefficient should be strictly positive and bounded between 0 and 1.
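One way to probe that intuition with a sketch (assumptions: b0 = 0, standard normal errors, and a deliberately extreme starting value x0 = 50): the transient from the starting point dies off at rate |b1|^t for BOTH signs of b1, so any “higher early variance” comes from starting far from the mean, not from the sign of the coefficient.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ar1(b1, n, x0):
    """Simulate x_t = b1 * x_{t-1} + e_t from a given starting value
    (b0 = 0 and standard normal errors are assumptions for illustration)."""
    x = np.empty(n)
    prev = x0
    for t in range(n):
        prev = b1 * prev + rng.standard_normal()
        x[t] = prev
    return x

n = 60_000
pos = simulate_ar1(+0.99, n, x0=50.0)
neg = simulate_ar1(-0.99, n, x0=50.0)
# After the transient, both settle at variance 1 / (1 - 0.99**2),
# about 50.3 - the sign of b1 flips the path, not its spread.
print(np.var(pos[10_000:]), np.var(neg[10_000:]))
```

In this sketch the b1 = +0.99 series shows the same burn-in behavior as the b1 = -0.99 series, which is consistent with the book treating the two symmetrically via the absolute value.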

Regardless, the Dickey-Fuller test as it’s explained is grossly flawed…

Rejecting g1 = 0 implies g1 is not 0, not that g1 is less than 0 (the stated alternative hypothesis). My feeling is that they would need two tests: one with the null hypothesis that g1 is greater than 0 (implying b1 greater than 1 and an explosive root), and a second with the null that g1 + 2 is less than 0 (implying that b1 is less than -1 and there is a negative explosive root).

UNLESS b1 is actually bounded by 0 and 1, in which case you would test the null that g1 is less than 0 (b1 is strictly less than 1) and then also test the null that g1 + 1 is greater than 0 (b1 is strictly greater than 0).

Either way, b1 (and by extension g1) is bounded both below and above, and testing that would require TWO tests.

I understand what you’re saying here. My confusion, I suppose, is about what specifically we are testing. If we are looking for a unit root, then we are looking for b1 = 1, and the Dickey-Fuller test should be a TWO-SIDED test where g1 = 0 is the null and g1 not equal to 0 is the alternative - and the conclusion should ONLY be that there is not a unit root (reject the null) or that we don’t know whether there is a unit root (fail to reject the null).

Nothing about the bounds of b1 (i.e., |b1| strictly less than 1) or stationarity can be gleaned, with the exception that the presence of a unit root precludes stationarity. That is essentially the crux of my argument: if we reject g1 = 0 we can say there is not a unit root, but we cannot say that |b1| is strictly less than 1.

I also take serious issue with the idea of an autoregressive series having a negative b1 and still being eligible to be covariance stationary. b1 should be bounded by [0, 1).

Me neither… I know what makes sense to me, and what is in the book seems like an over-simplification at best.

Dickey-Fuller is a test for unit roots only - not for mean reversion or covariance stationarity. Hence the confusion. Just because the test concludes there are no unit roots does not mean that the time series is covariance stationary.