can someone pls help?

**Unbiased** : the expected value of the statistic equals the value of the parameter it estimates.

**Efficient** : of all unbiased estimators, it has the smallest variance (i.e., the smallest sampling error).

**Consistent** : as the sample size increases, the sampling error decreases.
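Not from the reading, but a quick numpy sketch may make "unbiased" and "consistent" concrete for the simplest estimator, the sample mean: as n grows, the estimates stay centered on the true mean while their spread (the sampling error) shrinks.

```python
import numpy as np

# Illustration: the sample mean is unbiased and consistent.
# For each sample size n, draw many samples from N(5, 2^2) and look
# at the center and spread of the resulting estimates.
rng = np.random.default_rng(0)
true_mean = 5.0
spreads = []
for n in (10, 100, 1000):
    estimates = rng.normal(true_mean, 2.0, size=(10_000, n)).mean(axis=1)
    spreads.append(estimates.std())
    print(n, round(estimates.mean(), 3), round(estimates.std(), 3))
# The average of the estimates stays near 5 at every n (unbiased),
# while the spread shrinks roughly like 2/sqrt(n) (consistent).
```

The true mean of 5 and standard deviation of 2 are arbitrary choices for the demo.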

**4.1.1. The Consequences of Heteroskedasticity**

What are the consequences when the assumption of constant error variance is violated? Although heteroskedasticity does not affect the **consistency** ^{31} of the regression parameter estimators, it can lead to mistakes in inference. When errors are heteroskedastic, the *F*-test for the overall significance of the regression is **unreliable**.^{32} Furthermore, *t*-tests for the significance of individual regression coefficients are unreliable because heteroskedasticity introduces bias into estimators of the standard error of regression coefficients. If a regression shows significant heteroskedasticity, the standard errors and test statistics computed by regression programs will be incorrect unless they are adjusted for heteroskedasticity.

In regressions with financial data, the most likely result of heteroskedasticity is that the estimated standard errors will be underestimated and the *t*-statistics will be inflated. When we ignore heteroskedasticity, we tend to find significant relationships where none actually exist.^{33} The consequences in practice may be serious if we are using regression analysis in the development of investment strategies. As Example 7 shows, the issue impinges even on our understanding of financial models.
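The over-rejection the excerpt describes is easy to reproduce. Here's a small Monte Carlo of my own (not from the curriculum): the true slope is zero, but because the error variance grows with |x|, the conventional t-test rejects far more often than its nominal 5% level.

```python
import numpy as np

# Monte Carlo sketch: true slope = 0, error variance proportional to x^2.
# The conventional (non-robust) OLS standard error is too small here,
# so the t-test rejects the true null well over 5% of the time.
rng = np.random.default_rng(1)
n, reps, rejections = 100, 2000, 0
for _ in range(reps):
    x = rng.normal(size=n)
    y = np.abs(x) * rng.normal(size=n)       # heteroskedastic errors, slope = 0
    xc = x - x.mean()
    b1 = (xc @ y) / (xc @ xc)                # OLS slope estimate
    resid = y - y.mean() - b1 * xc
    s2 = (resid @ resid) / (n - 2)
    se = np.sqrt(s2 / (xc @ xc))             # conventional SE, assumes constant variance
    if abs(b1 / se) > 1.98:                  # ~5% two-sided critical value, df = 98
        rejections += 1
print(rejections / reps)                      # well above the nominal 0.05
```

Using a heteroskedasticity-robust (White) standard error instead would bring the rejection rate back toward 5%, which is exactly the "adjustment" the excerpt refers to.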

I don’t understand. The model is consistent but unreliable?

**4.2.1. The Consequences of Serial Correlation**

As with heteroskedasticity, the principal problem caused by serial correlation in a linear regression is an incorrect estimate of the regression coefficient standard errors computed by statistical software packages. As long as none of the independent variables is a lagged value of the dependent variable (a value of the dependent variable from a previous period), then the estimated parameters themselves will be **consistent** and need not be adjusted for the effects of serial correlation. If, however, one of the independent variables is a lagged value of the dependent variable—for example, if the T-bill return from the previous month was an independent variable in the Fisher effect regression—then serial correlation in the error term will cause all the parameter estimates from linear regression to be **inconsistent** and they will not be valid estimates of the true parameters.^{45}
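The lagged-dependent-variable case can also be simulated (my own sketch, with arbitrary coefficients of 0.5). With AR(1) errors, regressing y on its own lag is inconsistent: even with a huge sample the OLS slope settles near (b + ρ)/(1 + bρ) = 0.8 rather than the true 0.5, because the lagged dependent variable is correlated with the serially correlated error.

```python
import numpy as np

# Sketch: y_t = 0.5*y_{t-1} + u_t, with AR(1) errors u_t = 0.5*u_{t-1} + e_t.
# OLS of y_t on y_{t-1} does not converge to the true slope of 0.5.
rng = np.random.default_rng(2)
T, b, rho = 200_000, 0.5, 0.5
e = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + e[t]             # serially correlated errors
y = np.zeros(T)
for t in range(1, T):
    y[t] = b * y[t - 1] + u[t]               # lagged dependent variable
ylag, ycur = y[:-1], y[1:]
yl = ylag - ylag.mean()
bhat = (yl @ (ycur - ycur.mean())) / (yl @ yl)
print(round(bhat, 3))                         # close to 0.8, not the true 0.5
```

If the errors were not serially correlated (ρ = 0), the same regression would recover a slope near 0.5.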

**4.3.1. The Consequences of Multicollinearity**

Although the presence of multicollinearity **does not affect the consistency** of the OLS estimates of the regression coefficients, the estimates become extremely imprecise and **unreliable**. Furthermore, it becomes practically impossible to distinguish the individual impacts of the independent variables on the dependent variable. These consequences are reflected in inflated OLS standard errors for the regression coefficients. With inflated standard errors, *t*-tests on the coefficients have little power (ability to reject the null hypothesis).
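The "inflated standard errors" claim has a clean closed form: with two regressors correlated at r, each slope variance is multiplied by the variance inflation factor 1/(1 − r²). A short numpy check of my own (illustrative numbers) confirms it from σ²(XᵀX)⁻¹ directly.

```python
import numpy as np

# Sketch: compare the OLS standard error of a slope when the two
# regressors are uncorrelated vs. correlated at r = 0.99.
# Theory says the SE should inflate by about sqrt(1/(1 - r^2)) ~ 7x.
rng = np.random.default_rng(3)
n, sigma2 = 1_000, 1.0
z1, z2 = rng.normal(size=n), rng.normal(size=n)
ses = []
for r in (0.0, 0.99):
    x1 = z1
    x2 = r * z1 + np.sqrt(1 - r**2) * z2     # corr(x1, x2) ~ r by construction
    X = np.column_stack([np.ones(n), x1, x2])
    var_b = sigma2 * np.linalg.inv(X.T @ X)  # OLS coefficient covariance matrix
    ses.append(np.sqrt(var_b[1, 1]))
    print(f"r = {r:.2f}  se(b1) = {ses[-1]:.4f}")
# The r = 0.99 standard error is roughly 7x the r = 0 one.
```

This is why t-tests lose power under multicollinearity: the coefficients' standard errors balloon even though nothing about the estimator's consistency has changed.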

What’s wrong with that?

Consistency means that as the sample size gets bigger, the sampling error gets smaller. It says nothing about whether the estimate is biased or not.

Keep in mind that “unreliable” isn’t really a technical term the way “consistent” is. Unreliable, more or less, means you can’t fully trust it in some cases, whereas inconsistent (or consistent) has a specific technical definition that fits what S2000 said regarding the standard error of a statistic. The two terms are not interchangeable (I point this out since there seems to be a bit of attention on those words in the excerpt you posted).

I think the poster just needs to remember that the everyday meanings of “(in)consistent” and “(un)reliable” are different from the statistical ones; in a statistical setting these words mean something quite specific. Unreliable need not be about bias-- if it were, we would just use the word “bias”. For example, to contrast “bias” with “unreliable”: multicollinearity can cause unreliable coefficients because they can swing in magnitude and even direction when the model is refit with omitted observations or variables, but multicollinearity does not introduce bias.
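One way to see the instability-without-bias point is a simulation (mine, with made-up numbers): with two nearly collinear regressors, repeated fits produce slope estimates that are centered on the truth, so there is no bias, but with an enormous spread, so any single fit is unreliable and can even come out with the wrong sign.

```python
import numpy as np

# Sketch: x1 and x2 are almost collinear (both nearly equal to z).
# True model: y = 1*x1 + 1*x2 + noise. Fit many fresh samples and
# look at the distribution of the estimated coefficient on x1.
rng = np.random.default_rng(4)
slopes = []
for _ in range(2000):
    n = 200
    z = rng.normal(size=n)
    x1 = z + 0.05 * rng.normal(size=n)
    x2 = z + 0.05 * rng.normal(size=n)       # corr(x1, x2) ~ 0.995
    y = x1 + x2 + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    slopes.append(b[1])
slopes = np.array(slopes)
print(round(slopes.mean(), 2), round(slopes.std(), 2))
# Mean near the true 1.0 (no bias), but the spread is around 1 -- as
# large as the coefficient itself, so individual fits swing wildly
# and some come out negative.
```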


Unreliable need not be about bias-- if it were, we would just use the word “bias”

Good point.

Thanks.

For now, I will consider them as violations of assumptions.


Note that collinearity, unless perfect, is *not* a violation of an assumption.

Got it, thanks