Questions from linear regression:

1. When we say regression forecasts are unbiased, do we always need b0 = 0 and b1 = 1? I thought the sum of the residuals being 0 was enough to make the forecasts unbiased. (See Example 11: Evaluating Economic Forecasts 2 in Reading 11.)

2. When we test the null hypothesis that b1 = 1.0, a decrease in the SEE (standard error of estimate) increases the t-value, which increases the chance of rejecting the null. The math works out that way, but I thought a low SEE shouldn't make b1 significantly different from 1.0. Can anybody tell me why a low SEE makes b1 significantly different from 1.0? (See Reading 11, Section 3.5: Hypothesis Testing.)
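For question 2, here is a small Python sketch of the mechanics (all numbers are made up). The standard error of the slope is SEE / sqrt(sum of squared x-deviations), so a smaller SEE shrinks the denominator of the t-stat and inflates |t| for the same gap between b1 and 1.0:

```python
import math

# Hypothetical data: x = forecasts, y = actual values.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.2, 1.9, 3.3, 3.8, 5.1, 6.1]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# OLS slope and intercept.
s_xx = sum((xi - x_bar) ** 2 for xi in x)
b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / s_xx
b0 = y_bar - b1 * x_bar

# Residuals and the standard error of estimate (SEE).
resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
see = math.sqrt(sum(e ** 2 for e in resid) / (n - 2))

# Standard error of the slope, and the t-stat for H0: b1 = 1.0.
se_b1 = see / math.sqrt(s_xx)
t = (b1 - 1.0) / se_b1

# A smaller SEE (a tighter fit) shrinks se_b1, so the same (b1 - 1.0)
# gap yields a larger |t|: halving SEE doubles |t|.
```

The point is that the test measures the gap between b1 and 1.0 *in units of its own standard error*; a tight fit means we can estimate b1 precisely, so even a small departure from 1.0 becomes hard to attribute to chance.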

- In time-series regressions, the process needs to be mean reverting, and in the forecast-evaluation regression unbiasedness means b0 = 0 and b1 = 1. There are several other rules about bias in the procedures and results of time series. In regular linear least-squares regression (like a scatter plot, not a time series), the sum of the residuals is always zero; it is the sum of the SQUARED residuals that the fit minimizes. But that doesn't really have anything to do with bias. It's essentially part of the definition of which line you're going to fit. (Although it's not in the curriculum, an alternative would be fitting the line that minimizes the sum of the absolute values of the residuals instead.)
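To see why a zero residual sum says nothing about bias, here is a quick Python sketch with hypothetical numbers: OLS forces the residuals to sum to zero even when the forecasts are systematically double the actuals.

```python
# Fit actual = b0 + b1 * forecast, where the (made-up) forecasts
# are systematically about twice the actual values.
forecasts = [2.0, 4.0, 6.0, 8.0, 10.0]
actuals = [1.1, 1.8, 3.2, 3.9, 5.0]

n = len(forecasts)
f_bar = sum(forecasts) / n
a_bar = sum(actuals) / n

# OLS slope and intercept.
b1 = sum((f - f_bar) * (a - a_bar) for f, a in zip(forecasts, actuals)) \
     / sum((f - f_bar) ** 2 for f in forecasts)
b0 = a_bar - b1 * f_bar

resid = [a - (b0 + b1 * f) for f, a in zip(forecasts, actuals)]

# sum(resid) is zero (up to float error) by construction of OLS,
# yet b1 is about 0.5, far from the 1.0 that unbiasedness requires.
```

So "residuals sum to zero" is automatic for any OLS fit; the unbiasedness test has to look at the estimated b0 and b1 themselves.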

Hi chaddy, well, I don't understand why the mean-reverting equation has to satisfy b0 = 0 and b1 = 1, which would mean we are assuming the best-fitted line sits at exactly a 45-degree angle. The best-fitted line may be flatter or steeper, and as long as the expected value of the error term is 0, the line will produce unbiased forecasts. (That's why I said the sum of the residuals being 0 is enough to make the line the best fit, and the best-fitted line should be unbiased; correct me if I am wrong.) So can you help me with what the first paragraph of Example 11 on page 305 means?

Unfortunately I don't have the books and can't look at page 305, but you are definitely confused about something I can help clear up. When we are talking about time series, we are not saying the line has to be at a certain angle; you are confusing it with regular regression. Look at the equation we are generating:

y_t = a_0 + a_1 * y_(t-1)

a_1 is not multiplied by some random t value; it is multiplied by the previous y value! So a_0 and a_1 are essentially weights: a_1 is how much weight we place on the previous output, and a_0 is what pulls the next value back toward the mean (given a_1, the value of a_0 pins down that mean). Let's say our equation is y_t = 2/3 + (1/3) * y_(t-1). The mean-reverting level is a_0 / (1 - a_1) = 1. This isn't 45 degrees; that would imply y(5) = 5 or y(6) = 6. Instead we are saying that if a y is different from 1, the next y should be closer to 1. I'm not sure whether the mean-reverting level has to be 1; it's been a while since my test.
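The mean reversion in that example can be seen by just iterating the equation (a quick Python sketch; the starting values are arbitrary):

```python
# AR(1) from the example above: y_t = 2/3 + (1/3) * y_(t-1).
a0, a1 = 2.0 / 3.0, 1.0 / 3.0
mean_level = a0 / (1.0 - a1)  # = 1.0

def iterate(y0, steps):
    """Apply y <- a0 + a1*y repeatedly, returning the whole path."""
    path = [y0]
    for _ in range(steps):
        path.append(a0 + a1 * path[-1])
    return path

# Whether we start above or below the level, the path closes in on 1.0.
# Convergence only needs |a1| < 1; the level itself need not be 1.
high = iterate(5.0, 10)
low = iterate(-3.0, 10)
```

Each step cuts the distance to the mean-reverting level by the factor a_1, which is why |a_1| < 1 (not any particular angle) is what mean reversion requires.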

Time series is only a small fraction of the quant section…