Difficulty ranges from easy to hard; these are just some questions I jotted down and have been attempting to answer. Any clarification much appreciated.

1. Is the Breusch-Pagan test the only formal test for heteroskedasticity? That, plus eyeballing residual scatter plots?
2. Are the only consequences of misspecification biased partial slope coefficients and useless forecasts? (While the consequences of heteroskedasticity/autocorrelation/multicollinearity mainly affect standard errors and therefore t-tests and F-tests, i.e., evaluating the model itself?)
3. If a model has a unit root, it's a random walk. Then why do we first-difference the data at all? To model it? Because if that's the case, we know forecasting a random walk is useless. And if it's to prove the series has a unit root, isn't the Dickey-Fuller test a better way of testing for one?
4. To test for stationarity we can plot the data, run an AR model and examine the t-stats on the residual autocorrelations, or do the DF test, right? My question: when we examine those autocorrelation t-stats, are we also testing for serial correlation simultaneously? Or are they the same thing?
5. If you cannot use t-stats to detect a random walk (i.e., cannot directly test b1 = 1), then how can we use t-stats on the residual autocorrelations to do it? Or are the two approaches different enough that it's fine?
6. How many tests are there for serial correlation? Durbin-Watson, t-tests on the significance of residual autocorrelations in an AR model... is that all?
7. When modeling a time-series regression where both the y-variable and the x-variables are time series, what's the difference between that and multiple regression? Aren't the x-variables in multiple regression also time series? This one is either a really subtle difference, or I'm just missing the point.

Just to confirm:

1. We cannot use DW on AR models, so we use t-stats instead. So DW is only used on trend models and ordinary regression models.
2. Seasonality is just another type of serial correlation.
3. Heteroskedasticity (non-constant error variance) can occur in any model.

Whew!
Much thanks to anyone who can help clarify. A couple of things I find helpful:

1. Positive serial correlation means things look "too good": we end up passing off too many coefficients and tests as statistically significant.
2. I use RSS feeds to get my information, so naturally they "explain" a lot; think of RSS as the regression (explained) sum of squares. Divided by the total sum of squares, it tells you how much of the variation is useful/explanatory.
3. Multicollinearity is like a bad but smart work group. Everyone can do something important, but they bicker and it clouds their true contributions. You need to fire someone (drop a variable) to figure out what's going on.
4. Apply n - k - 1 for everything; don't try to memorize n - 2 separately for simple regressions.
5. (Stolen, lol) "I'm hetero = BP." Thanks, this one really helps it drill in.

Hope this helps people looking for some help (feel free to add questions) and those looking to reaffirm their knowledge. Thanks in advance.