Heteroskedasticity in time series vs multiple regression

I can see that there’s a difference in the definition of conditional heteroskedasticity between the time series chapter and the multiple regression chapter. In multiple regression, conditional heteroskedasticity is the situation in which the variance of the error terms depends on the values of the independent variables, while in time series ARCH exists if the variance of the residuals in one period depends on the variance of the residuals in previous periods. Could someone please explain the reason for this difference?

The second one (time series) is a problem related exclusively to AR models, because the independent variables are the same as the dependent variable, only lagged.
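
To see what that definition means in practice, here is a minimal pure-Python sketch of an ARCH(1) process (the coefficients a0 and a1 and the sample size are arbitrary illustrative values, not from the curriculum): each residual’s conditional variance depends on the previous squared residual, so big shocks tend to be followed by more volatile residuals.

```python
import random

random.seed(42)

a0, a1 = 0.3, 0.5   # illustrative ARCH(1) coefficients (assumed values)
n = 5000
e_prev = 0.0
residuals = []
for _ in range(n):
    # Conditional variance depends on the previous squared residual
    sigma2 = a0 + a1 * e_prev ** 2
    e = random.gauss(0.0, sigma2 ** 0.5)
    residuals.append(e)
    e_prev = e

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Split each residual by the size of the one before it: volatility clusters
after_big = [e for ep, e in zip(residuals, residuals[1:]) if ep ** 2 > 1.0]
after_small = [e for ep, e in zip(residuals, residuals[1:]) if ep ** 2 <= 1.0]
print(var(after_big) > var(after_small))  # residuals after big shocks are more volatile
```

Under homoskedastic errors the two sample variances would be about equal; here the one conditioned on a large previous shock is clearly bigger.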

The error terms in AR models are intrinsically correlated with the independent variables; that’s why use of the Durbin-Watson statistic is not advised there, and you use, for example, the Dickey-Fuller test instead. So in both cases, time series and multiple regression, we have the same underlying fact: errors correlated with the independent variables.

Remember that a multiple regression can be a time series model too. For example, GDP(t) = b0 + b1*Consumption(t) + b2*Manufacturing(t) + e(t) is a multiple time series regression.
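
In that cross-sectional/multiple-regression setting, a quick pure-Python sketch shows what conditional heteroskedasticity does to the usual standard errors. The data-generating process here is purely an assumption for illustration (the error’s standard deviation is set to the square of the regressor), and it uses a simple one-regressor case so the formulas stay short:

```python
import random

random.seed(7)

# Simulated cross-section with conditional heteroskedasticity:
# error sd = x^2, a deliberately strong, assumed form of variance depending on x.
n = 3000
x = [random.uniform(1.0, 5.0) for _ in range(n)]
y = [2.0 + 0.5 * xi + random.gauss(0.0, xi ** 2) for xi in x]

mx = sum(x) / n
my = sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
b0 = my - b1 * mx
u = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]  # residuals

# Conventional OLS SE assumes one constant error variance...
s2 = sum(ui ** 2 for ui in u) / (n - 2)
se_ols = (s2 / sxx) ** 0.5
# ...while the White (HC0) robust SE lets the variance depend on x.
se_white = (sum(((xi - mx) * ui) ** 2 for xi, ui in zip(x, u)) / sxx ** 2) ** 0.5
print(f"OLS SE: {se_ols:.4f}  White SE: {se_white:.4f}")
```

With this setup the White standard error comes out noticeably larger than the conventional one, which is exactly why you switch to robust (White-corrected) errors when the error variance depends on the regressors.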

Obviously the way you fix the two problems is not the same. In multiple regression (not AR models) you use a robust standard error calculation (White-corrected errors); for AR models you use an ARCH test to detect it, and you correct the problem by adding more explanatory variables, transforming variables, first-differencing, etc.
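
For the detection step, here is a minimal sketch of the idea behind Engle’s ARCH test (the simulated coefficients and sample size are illustrative assumptions): regress the squared residuals on their own lag; a significantly positive slope, or a large n·R² statistic against the chi-square(1) critical value, signals ARCH.

```python
import random

random.seed(1)

# Simulate residuals that genuinely have ARCH(1) effects (illustrative values).
a0, a1, n = 0.3, 0.5, 2000
e, series = 0.0, []
for _ in range(n):
    e = random.gauss(0.0, (a0 + a1 * e * e) ** 0.5)
    series.append(e)

# Auxiliary regression: e_t^2 on e_{t-1}^2.
y = [v * v for v in series[1:]]
x = [v * v for v in series[:-1]]
mx, my = sum(x) / len(x), sum(y) / len(y)
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = my - slope * mx

# n * R^2 of the auxiliary regression ~ chi-square(1) if there is no ARCH.
ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - my) ** 2 for yi in y)
r2 = 1.0 - ss_res / ss_tot
lm_stat = len(y) * r2
print(f"slope={slope:.3f}  n*R^2={lm_stat:.1f}")  # 3.84 is the 5% chi-square(1) cutoff
```

Because the simulated series was built with a1 > 0, the statistic comes out far above 3.84 and the test correctly flags ARCH; on homoskedastic residuals it would usually stay below the cutoff.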

Hope this helps.


Thanks a lot… you always help :)