Serial correlation and Durbin-Watson

The text says the Durbin-Watson test is appropriate for trend models but not for autoregressive models. Why is this the case?

A trend model does not explicitly account for the autocorrelated errors (the trend is just another regressor that helps explain the dependent variable), whereas an autoregressive model does account for the AR process. So you can still apply the DW test to a trend model.
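Here is a quick sketch of what that looks like in practice. It assumes Python with statsmodels, and the data are simulated (a linear trend plus AR(1) errors) purely for illustration:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
t = np.arange(100)

# Simulate a trend plus AR(1) errors (phi = 0.7), so the trend model's
# residuals are autocorrelated on purpose.
e = np.zeros(100)
for i in range(1, 100):
    e[i] = 0.7 * e[i - 1] + rng.normal()
y = 5 + 0.3 * t + e

# Trend model: y regressed on time only; it does not model the AR errors.
X = sm.add_constant(t)
trend_fit = sm.OLS(y, X).fit()

# DW near 2 => no first-order autocorrelation; well below 2 => positive AC.
print(durbin_watson(trend_fit.resid))  # expect well below 2 here
```

Because the trend regression leaves the AR structure in the residuals, the DW statistic lands well below 2 and the test has something real to detect.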

Think of it this way: once the model itself accounts for the autocorrelated errors, as an AR model does, it is meaningless to use the DW test. Also, the DW test is most powerful against an AR(1) process, so it will not be as effective at detecting higher-order AR processes.
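You can see why DW is tied to AR(1) from its formula, DW = sum((e_t - e_{t-1})^2) / sum(e_t^2), which is approximately 2(1 - r1), where r1 is the lag-1 sample autocorrelation of the residuals. A hand-rolled sketch (simulated white noise, just for illustration):

```python
import numpy as np

def dw_by_hand(e):
    # Numerator sums squared successive differences; only adjacent
    # residuals enter, which is why DW is most powerful against AR(1).
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(1)
e = rng.normal(size=500)
r1 = np.corrcoef(e[:-1], e[1:])[0, 1]
print(dw_by_hand(e), 2 * (1 - r1))  # the two values should be close
```

Only adjacent residuals enter the numerator, so autocorrelation at higher lags can slip past the test entirely.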

Can someone explain further why Durbin Watson is not appropriate for autoregressive models?

The DW test should only be used when the errors are likely to be correlated because of the time-series nature of the data. If you have used an AR model, you have already accounted for that correlation within the model itself. Therefore, there is no correlation among the errors left to test for, and it is no longer appropriate to apply the DW test.
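A minimal sketch of that point, again assuming statsmodels and simulated data: fit an AR(1) to a strongly autocorrelated series, and the residuals come out approximately white, so DW sits near 2 by construction and tells you nothing new.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(2)
e = np.zeros(300)
for i in range(1, 300):
    e[i] = 0.7 * e[i - 1] + rng.normal()  # strongly autocorrelated series

# The AR(1) fit absorbs the error correlation, leaving ~white residuals.
ar_fit = AutoReg(e, lags=1).fit()
print(durbin_watson(ar_fit.resid))  # ~2: nothing left for DW to detect
```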

If you apply DW to data that are not going to have correlated errors, you can run into sticky situations. For example, if you use DW on cross-sectional data, you can still get a significant test statistic, BUT it is MEANINGLESS. Research has shown that randomized data can produce significant DW tests (which does not indicate autocorrelated errors). For this reason, applying the DW test requires a little bit of logic.
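You can see this with a quick simulation (the 1.6/2.4 cutoffs below are illustrative stand-ins, not tabulated DW critical values, which depend on the sample size and number of regressors):

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 30, 10_000
flagged = 0
for _ in range(reps):
    e = rng.normal(size=n)  # independent errors: no autocorrelation at all
    dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
    if dw < 1.6 or dw > 2.4:  # illustrative cutoffs, not real DW bounds
        flagged += 1
print(flagged / reps)  # a nontrivial fraction is "flagged" by pure chance
```

Even with errors that are independent by construction, a fair share of draws crosses any fixed cutoff purely by chance.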

So, going back to the fact that an AR model removes the correlation from the error terms: it is meaningless to apply this test (and it is still possible to get a significant DW statistic). Using statistical logic, you can say that the DW test is inappropriate for an AR model because you have already remedied the problem you would be testing for.

Bottom line: once you have remedied a problem, it makes no sense to test for it.

Hope this helps!

Okay thanks

Was that what you were looking for?