I’m a little confused — OK, to be honest, greatly confused — by first differencing…

To quote the book: “we can compute the mean-reverting level of the first difference model as b0/(1 - b1) = 0/1 = 0.” Why, for first differencing, are we assuming that b1 = 0? Does it have something to do with the expected value of the error term being zero?

I have no idea why subtracting the value of the time series in the preceding period helps to obtain b1 = 1…

If we have an AR(1) random walk with b0 = 0 (so no drift, to keep with your example), first differencing will remove the unit root (b1 = 1). The new (first-differenced) series we created doesn’t have a unit root (assuming the original process had only one unit root). This means that b1 for the first-differenced series shouldn’t be statistically different from zero (we now have a covariance-stationary series).
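To make the algebra explicit, here’s a quick sketch using the AR(1) notation from the book:

```latex
% AR(1): x_t = b_0 + b_1 x_{t-1} + \varepsilon_t
% Random walk without drift: b_0 = 0,\; b_1 = 1, so
x_t = x_{t-1} + \varepsilon_t
% Taking the first difference:
\Delta x_t = x_t - x_{t-1} = \varepsilon_t
% The \varepsilon_t are uncorrelated white noise, so \Delta x_t has no
% dependence on \Delta x_{t-1}. Fitting an AR(1) to the differenced series
% therefore gives b_0 = 0 and b_1 = 0, and the mean-reverting level is
% b_0 / (1 - b_1) = 0 / 1 = 0.
```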

We assume that b1 of the first-differenced series is zero because we’re assuming the original process contains only one unit root (taking the first difference removes it, as mentioned before). We then run the test to make sure we’ve actually fixed the non-stationarity problem.
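You can also see this numerically. Here’s a minimal sketch (the helper name `ar1_slope` is mine, not from the book): simulate a random walk with b0 = 0, take its first difference, and estimate the AR(1) slope of each series by OLS.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.standard_normal(2000)   # white-noise errors
x = np.cumsum(eps)                # random walk: x_t = x_{t-1} + eps_t  (b0 = 0, b1 = 1)
dx = np.diff(x)                   # first difference: dx_t = x_t - x_{t-1} = eps_t

def ar1_slope(series):
    # OLS slope b1 from regressing series_t on series_{t-1}
    return np.polyfit(series[:-1], series[1:], 1)[0]

print(ar1_slope(x))   # close to 1: the levels have a unit root
print(ar1_slope(dx))  # close to 0: differencing removed it
```

The slope estimated on the levels comes out near 1, while the slope on the differenced series is statistically indistinguishable from 0 — which is exactly why the book plugs b1 = 0 into b0/(1 - b1).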

Sorry to reopen this topic, but why does taking the first difference remove the unit root? Mathematically speaking, I can’t see it. Is it because the difference is just the error term, and thus we can assume that b_{1} = 0?