Quant First Differencing

G’day all,

I’m a little confused… ok, to be honest, I’m greatly confused by first differencing…

To quote the book: “we can compute the mean-reverting level of the first difference model as b0/(1 - b1) = 0/1 = 0.” Why, for first differencing, are we assuming that b1 = 0??? Does it have something to do with the expected value of the error term being zero?

I have no idea why subtracting the value of the time series in the preceding period helps to deal with b1 = 1…

If we have an AR(1) random walk and b0 = 0 (so no drift, to keep with your example), first differencing will remove the unit root (b1 = 1). The new (first-differenced) series we created doesn’t have a unit root (assuming the original process only had one unit root). This means that b1 for the first-differenced series shouldn’t be statistically different from zero (now we have a covariance-stationary series).
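
To spell out the algebra (this isn’t a quote from the book, just the standard derivation in its b0/b1 notation, writing x_t for the original series and y_t for its first difference):

```latex
% Original series: AR(1) with a unit root (b_1 = 1) and no drift (b_0 = 0)
x_t = b_0 + b_1 x_{t-1} + \epsilon_t = x_{t-1} + \epsilon_t

% First difference: subtract x_{t-1} from both sides
y_t = x_t - x_{t-1} = \epsilon_t

% Modelling y_t as its own AR(1), y_t = b_0 + b_1 y_{t-1} + \epsilon_t,
% the fit gives b_0 = 0 and b_1 = 0, so the mean-reverting level is
b_0 / (1 - b_1) = 0 / (1 - 0) = 0
```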

We assume that b1 of the first-differenced series is zero because we’re assuming the original process only contains one unit root (taking the first difference removes the root, as mentioned before). We then test the differenced series for a unit root (e.g., with a Dickey-Fuller test) to make sure we’ve fixed the problem of non-stationarity.
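
If it helps to see it numerically, here’s a quick simulation sketch (assuming numpy and statsmodels are available; the seed, sample size, and variable names are just illustrative choices):

```python
# A minimal simulation: AR(1) fits and unit root tests on a random walk
# and on its first difference.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
T = 1_000

# Random walk with no drift: x_t = x_{t-1} + e_t  (b0 = 0, b1 = 1)
e = rng.normal(size=T)
x = np.cumsum(e)

# First difference: y_t = x_t - x_{t-1} = e_t
y = np.diff(x)

# Fit AR(1) models to the levels and to the first differences
fit_levels = AutoReg(x, lags=1, trend="c").fit()
fit_diffs = AutoReg(y, lags=1, trend="c").fit()
print("AR(1) slope on levels (should be near 1):", fit_levels.params[1])
print("AR(1) slope on differences (should be near 0):", fit_diffs.params[1])

# Augmented Dickey-Fuller test: null hypothesis = the series has a unit root
print("ADF p-value, levels (typically large, fail to reject):", adfuller(x)[1])
print("ADF p-value, differences (should be tiny, reject):", adfuller(y)[1])
```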

Thanks tickersu, your explanation really helped. I struggle big time with this, so I’m sending some good karma your way.

No problem and I’m glad it helped. As with most things, repetition and exposure will make it easier.

I’ll find out on the 27th if I caught that karma!

Sorry to reopen this topic again, but why does the first difference remove the unit root? Mathematically speaking, I can’t see it. Is it because the difference is just the error term and thus we can assume that b1 = 0?

EDIT: I think I got it.

Thanks!

Hmm, I didn’t get it.

Already got it!! 100%.

I thought that first differencing does not guarantee elimination of a unit root in an autoregressive model; it’s just that it could work.
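
For what it’s worth, differencing once removes exactly one unit root, so it only fixes the problem when the original series has a single unit root; a series with two unit roots still has one left after a single difference. A rough sketch under the same numpy/statsmodels assumptions as the earlier snippet:

```python
# A series with two unit roots still has one left after a single difference.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
e = rng.normal(size=2_000)

x_i1 = np.cumsum(e)      # one unit root: a random walk
x_i2 = np.cumsum(x_i1)   # two unit roots: a cumulated random walk

# ADF null hypothesis: the series has a unit root (small p-value => reject)
print("I(1), first difference: ", adfuller(np.diff(x_i1))[1])       # expect small: stationary
print("I(2), first difference: ", adfuller(np.diff(x_i2))[1])       # expect large: still a unit root
print("I(2), second difference:", adfuller(np.diff(x_i2, n=2))[1])  # expect small: stationary
```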