Statistical significance in multiple regression

In a regression, I assume a low t-stat means the model is good and the factors all “fit” the model.

But when a t-stat is large and the curriculum says the coefficient is statistically significant (H0 rejected based on the t-stat; the coefficient is different from zero), what does that actually mean? Does it mean the coefficient does not fit the model and should be replaced with something that will produce a lower t-stat?

The null hypothesis in these t-tests is that the coefficient is zero. If the calculated t-statistic is (too) low, then you cannot reject H0; and if a slope is, in fact, zero, then that independent variable tells you nothing about the dependent variable. However, if the calculated t-statistic is high (enough), you reject H0 and conclude that that independent variable does tell you something about the dependent variable. You want high t-statistics, not low ones.

The t-statistics don’t tell you whether the model is good or not; R² measures how well the model fits the data.
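A minimal sketch of both points, using simulated data and statsmodels (the variable names and numbers are mine, purely for illustration):

```python
# Minimal sketch (simulated data): coefficient t-stats vs. overall fit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
x1 = rng.normal(size=n)                  # truly related to y
x2 = rng.normal(size=n)                  # pure noise
y = 2.0 + 1.5 * x1 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()

# H0: coefficient = 0. Large |t| -> reject H0 (variable is significant);
# small |t| -> fail to reject (variable tells you nothing about y).
print(fit.tvalues)    # expect large |t| for x1, small |t| for x2
print(fit.pvalues)
print(fit.rsquared)   # R² measures how well the model fits, separately
```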

So you want a high t-stat for the coefficient.

I think I was misunderstanding it because one of the QBank questions in Schweser asked:

“The table below includes the first eight residual autocorrelations from fitting the first-differenced time series of the absenteeism rates (ABS) at a manufacturing firm with the model ΔABS_t = b_0 + b_1ΔABS_{t-1} + ε_t. Based on the results in the table, which of the following statements most accurately describes the appropriateness of the specification of the model ΔABS_t = b_0 + b_1ΔABS_{t-1} + ε_t?”

Then it listed a table of residual autocorrelations with very low t-statistics, and the answer was: “The low values for the t-statistics indicate that the model fits the time series.”

So, is a low t-statistic good for any time series, or only in a first-differenced time series?

Are we talking about multiple regression (as in the title), or autoregression (as in your last post)?

Originally it was multiple regression, but I think I was confusing the interpretation of t-statistics in multiple regression with t-statistics in autoregression.
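If I now read that QBank question correctly, the t-stats in its table are testing the residual autocorrelations, where H0 is “autocorrelation = 0”, so small t-stats mean the residuals look like white noise and the model is well specified. A rough sketch with simulated data (all numbers made up):

```python
# Rough sketch (simulated data): t-stats on residual autocorrelations
# after fitting an AR(1) to a first-differenced-style series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 120
d = np.zeros(T)
for t in range(1, T):                    # series follows an AR(1)
    d[t] = 0.4 * d[t - 1] + rng.normal()

# The model in the question: regress ΔABS_t on ΔABS_{t-1}
X = sm.add_constant(d[:-1])
fit = sm.OLS(d[1:], X).fit()
resid = fit.resid
n = len(resid)

# t-stat for the autocorrelation at lag k: r_k / (1 / sqrt(n))
for k in range(1, 9):
    r_k = np.corrcoef(resid[:-k], resid[k:])[0, 1]
    print(k, round(r_k, 3), round(r_k * np.sqrt(n), 2))

# Small |t| at every lag -> cannot reject zero autocorrelation ->
# the AR(1) specification captured the dynamics. Here "low is good".
```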

No matter what the model is (simple linear regression or a time series model with serial correlation), the metric

Z = [coefficient] / [standard error of the coefficient]

is important. If the absolute value of Z is high, this indicates statistical significance and the coefficient should stay in the model. If the absolute value of Z is low, the coefficient is non-significant and is a candidate for being dropped from the model. In the linear regression setting, under the null hypothesis Z has a t-distribution, which is why it is called the “t-statistic”. In many other linear and non-linear models, Z has a standard normal distribution and is called the “z-statistic”, the “Wald statistic”, or nothing at all. In models run on relatively small data sets, Z may have a complicated distribution, but it is still important and can be used for calculating the p-value and assessing the statistical significance of the coefficient in question.
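A tiny sketch of that calculation with made-up numbers (t-distribution case; swap in the standard normal for the z-statistic case):

```python
# Tiny sketch, made-up numbers: Z = coefficient / standard error,
# with a two-sided p-value from the t-distribution.
from scipy import stats

coef, se, df = 0.85, 0.30, 57          # hypothetical estimate, SE, residual df

z = coef / se                          # the "t-statistic" in OLS output
p_value = 2 * stats.t.sf(abs(z), df)   # two-sided p-value

print(round(z, 2), round(p_value, 4))
# Large |Z| -> small p-value -> keep the coefficient in the model.
```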

The name “t-statistic” is just a name, and even for the same model, different statistical packages may print a different title in the output table.

Isn’t the t-test for a multiple regression model a two-tailed test? If the t-statistic lies between the upper and lower degrees of freedom, then we reject the null hypothesis, i.e., the independent variable contributes to explaining the dependent variable?

Yes

No. If the t-statistic (or other test statistic) lies beyond the critical value(s) (not the DF), then we can reject H0.

If the t-value is greater than the positive critical value, or less than the negative critical value, then you reject the null. What it means when you reject the null is a different question, one that is answered differently depending on what you are trying to find out.
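In code, the two-tailed decision rule looks like this (illustrative numbers only):

```python
# Two-tailed decision rule, illustrative numbers.
from scipy import stats

t_stat, df, alpha = 2.31, 25, 0.05

t_crit = stats.t.ppf(1 - alpha / 2, df)  # positive critical value
reject = abs(t_stat) > t_crit            # beyond either tail -> reject H0

print(round(t_crit, 3), reject)          # 2.06, True at the 5% level
```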