Hello, I'm trying to understand how to test a hypothesis using the confidence interval method.

I have come across questions where H_{0} and H_{a} are tested using a confidence interval to determine whether there is sufficient evidence to reject the null hypothesis or to fail to reject it. In this method the critical values are never compared against the test statistic. Why is that?

There is a different method of hypothesis testing (the t-test) that does compare the test statistic against the critical values.

I can include more granular details for context if the above question is a little too vague. Thank you.

When testing a hypothesis about the estimated population coefficient in a single-predictor regression model, there are two methods; the first, which uses a confidence interval, is the one you have mentioned.

After computing the confidence interval using the formula, if the interval does not include the hypothesized population coefficient, you reject the null hypothesis.
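As a minimal sketch of that decision rule, here is the CI for the slope of a simple linear regression computed from scratch (the data and the hypothesized slope of 0 are made-up illustrations, not from the thread):

```python
# Sketch: test H0: beta1 = beta1_0 via a confidence interval in simple
# linear regression. Data and beta1_0 are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 30)
y = 2.0 + 1.5 * x + rng.normal(0, 1, 30)       # simulated: true slope is 1.5

n = len(x)
x_bar, y_bar = x.mean(), y.mean()
sxx = np.sum((x - x_bar) ** 2)
b1 = np.sum((x - x_bar) * (y - y_bar)) / sxx   # estimated slope
b0 = y_bar - b1 * x_bar
resid = y - (b0 + b1 * x)
s2 = np.sum(resid ** 2) / (n - 2)              # residual variance
se_b1 = np.sqrt(s2 / sxx)                      # standard error of the slope

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
ci = (b1 - t_crit * se_b1, b1 + t_crit * se_b1)

beta1_0 = 0.0                                  # hypothesized slope under H0
reject = not (ci[0] <= beta1_0 <= ci[1])       # reject iff beta1_0 is outside the CI
print(f"95% CI for slope: ({ci[0]:.3f}, {ci[1]:.3f}); reject H0: {reject}")
```

Note that the critical value t_crit still appears, but only inside the interval's construction; the decision itself is just "is the hypothesized value inside or outside?"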

In other words, you do not need to look up a critical value to compare against, as you would when using the second method.

Mind you, this confidence interval approach can only be used for a single-predictor regression model, not a multiple regression model.

You can certainly use a CI to test a single coefficient in a multiple regression model. However, it won't allow you to conduct the test for overall model utility (F-test for all slope coefficients) as you would in the case of a single predictor in SLR.
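To illustrate both points, here is a sketch with two simulated predictors: a CI tests one coefficient at a time, while the overall F-test asks whether all slopes are zero at once. The data, sample size, and true coefficients are assumptions for illustration.

```python
# Sketch: per-coefficient CI vs. overall F-test in multiple regression.
# Simulated data; x2's true coefficient is 0, x1's is 2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k = 50, 2
X = rng.normal(size=(n, k))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=n)

# OLS via a design matrix with an intercept column.
D = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
resid = y - D @ beta
sse = resid @ resid
mse = sse / (n - k - 1)
se = np.sqrt(mse * np.diag(np.linalg.inv(D.T @ D)))  # coefficient SEs

# CI for the x2 slope (beta[2]); H0: that single coefficient is 0.
t_crit = stats.t.ppf(0.975, df=n - k - 1)
low, high = beta[2] - t_crit * se[2], beta[2] + t_crit * se[2]
reject_x2 = not (low <= 0.0 <= high)

# Overall F-test (all slopes zero at once) -- this is what a
# per-coefficient CI cannot give you.
sst = np.sum((y - y.mean()) ** 2)
F = ((sst - sse) / k) / mse
p_overall = stats.f.sf(F, k, n - k - 1)
print(f"x2 CI: ({low:.3f}, {high:.3f}), reject: {reject_x2}")
print(f"F = {F:.1f}, p = {p_overall:.3g}")
```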

Instead of using critical values and a test statistic, these are converted into the original units of measure for the given variable. In other words, you could easily turn the CI endpoints into the corresponding lower and upper critical t-values, convert the estimated coefficient into an observed t-statistic, and make the comparison that way. The benefit of not using the critical values is that you are working in relevant units of measure, which gives the interval more contextual meaning.