multicollinearity

strong F stat, weak t stats. or in plain language - the independent variables as a whole are strong predictors of the dependent variable, but none of them is individually a strong predictor
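
Here's a rough numpy-only sketch of that symptom (the data is simulated, so the exact numbers are illustrative): two nearly identical regressors jointly explain y well (big F), but each slope has an inflated standard error, so its t stat looks weak.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # nearly a copy of x1 -> multicollinearity
y = 1.0 + x1 + x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
k = X.shape[1] - 1                          # number of slopes
dof = n - k - 1
s2 = resid @ resid / dof
cov = s2 * np.linalg.inv(X.T @ X)
t_stats = beta / np.sqrt(np.diag(cov))      # individual significance

# F test of joint significance of the slopes
tss = ((y - y.mean()) ** 2).sum()
rss = resid @ resid
F_stat = ((tss - rss) / k) / (rss / dof)
print(F_stat, t_stats[1:])                  # F is huge, individual t stats are not
```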

heteroskedasticity

when the variance of the error term depends on the independent variables. to detect, simply plot the independent variable vs the residuals and see if there seems to be a pattern. also you can regress the squared residuals on the independent variables and test whether the coefficients on the independent variables = 0. if so, there is no heteroskedasticity. formally this is the nR^2 (Breusch-Pagan) test, a chi-squared test under the H0 of no heteroskedasticity.
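
The nR^2 step can be sketched like this with plain numpy (simulated data where the error variance grows with x, so the test should fire; the 3.84 cutoff is the 5% chi-squared critical value with one regressor):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
# heteroskedastic errors: standard deviation grows with x
y = 1.0 + 0.5 * x + rng.normal(size=n) * np.exp(x / 2)

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u2 = (y - X @ beta) ** 2                 # squared residuals

# auxiliary regression: squared residuals on the independent variable
g, *_ = np.linalg.lstsq(X, u2, rcond=None)
fitted = X @ g
r2 = 1 - ((u2 - fitted) ** 2).sum() / ((u2 - u2.mean()) ** 2).sum()
lm = n * r2                              # the nR^2 statistic
print(lm)                                # compare to chi-squared(1) 5% value, ~3.84
```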

serial correlation

test with the Durbin-Watson stat (roughly DW ≈ 2(1 − r), so values near 2 mean no serial correlation). if DW < dl you have positive serial correlation. if DW > 4 − dl you have negative serial correlation. if it falls between dl and du (or between 4 − du and 4 − dl) the test is inconclusive.
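
A quick sketch of computing DW on simulated data (the rho = 0.8 errors are my own illustrative setup): positively autocorrelated residuals push the statistic well below 2.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
# AR(1) errors with rho = 0.8 -> positive serial correlation
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.normal()
y = 1.0 + 0.5 * x + e

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ beta

# Durbin-Watson: sum of squared first differences over sum of squares
dw = np.sum(np.diff(u) ** 2) / np.sum(u ** 2)
print(dw)        # near 2(1 - 0.8) = 0.4, far below 2
```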

in AR(1) models you have to check the covariance stationarity requirement, which means the mean and variance don't change over time (lag coefficient |b1| < 1). you can use the Dickey-Fuller unit root test, which regresses the differenced series xt − xt-1 on xt-1 and tests whether the coefficient g1 = b1 − 1 = 0. if it does, you have a unit root. (note the test uses its own critical values, not the standard t table.)
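
A minimal sketch of that regression on a simulated random walk (so b1 = 1 by construction; this only shows the point estimate, it does not apply the Dickey-Fuller critical values):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
# random walk: x_t = x_{t-1} + e_t, i.e. b1 = 1, a unit root
x = np.cumsum(rng.normal(size=n))

# regress the first difference on the lagged level;
# the slope estimates g1 = b1 - 1, which should be near 0 here
dx = np.diff(x)
X = np.column_stack([np.ones(n - 1), x[:-1]])
g, *_ = np.linalg.lstsq(X, dx, rcond=None)
print(g[1])      # close to 0 under a unit root
```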

heteroskedasticity in AR models (ARCH) can be tested by regressing the squared residuals on lagged values of the squared residuals and seeing if the coefficient on the lagged term = 0.
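
A sketch of that ARCH check on simulated ARCH(1) errors (the 0.3/0.5 parameters are just an illustrative choice): when this period's variance depends on last period's squared shock, the slope in the auxiliary regression comes out well above 0.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
# ARCH(1) errors: variance depends on last period's squared shock
u = np.zeros(n)
for t in range(1, n):
    u[t] = rng.normal() * np.sqrt(0.3 + 0.5 * u[t - 1] ** 2)

# regress squared residuals on their own lag; a nonzero slope
# signals autoregressive conditional heteroskedasticity
u2 = u ** 2
X = np.column_stack([np.ones(n - 1), u2[:-1]])
a, *_ = np.linalg.lstsq(X, u2[1:], rcond=None)
print(a[1])      # well above 0, near the true ARCH coefficient
```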

I'll start with that for now. if you need more clarification just post