Type I, II Error

For a hypothesis test with a probability of a Type II error of 60% and a probability of a Type I error of 5%, which of the following statements is most accurate?

A. The power of the test is 40%, and there is a 5% probability that the test statistic will exceed the critical value(s).
B. There is a 95% probability that the test statistic will be between the critical values if this is a two-tailed test.
C. There is a 5% probability that the null hypothesis will be rejected when it is actually true, and the probability of rejecting the null when it is false is 40%.

They claim that the answer is C, on the basis that in A and B the null hypothesis could be false, which would make those claims invalid since the probability of rejection would then be unknown. How does this make sense? I don't get their reasoning. Is the probability of rejecting the null hypothesis already incorporated into the probabilities? Thanks!

A isn’t necessarily true because the null could be true or false; the statement would need to say “assuming the null is true, there is a 5% probability…”

B isn’t necessarily true for the same reason above.

C is always true assuming the probability of each error is as described in the question: it’s definitional for the chosen significance level (alpha = P(Type I error)) and for the power of the test (power = 1 − P(Type II error) = P(reject H0 | H0 is false)). You may see beta used for P(Type II error).

I don’t think this is a good question, but it is accurate. It may be helpful to review the definitions of Type I and Type II errors as well as power of a test.
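
If it helps to see those definitions in action, here’s a rough simulation sketch in Python (my own illustration, not from the question or curriculum; the normal population, the sample size of 25, and the true mean of 0.34 under the alternative are assumptions picked so the power lands near the 40% in the question):

```python
# Hypothetical one-sample, two-tailed z-test of H0: mu = 0 (illustrative setup).
import numpy as np

rng = np.random.default_rng(42)
alpha = 0.05
n, sigma = 25, 1.0
z_crit = 1.96                       # two-tailed critical value for alpha = 5%

def rejection_rate(true_mean, trials=100_000):
    """Fraction of simulated samples for which H0: mu = 0 is rejected."""
    samples = rng.normal(true_mean, sigma, size=(trials, n))
    z = samples.mean(axis=1) / (sigma / np.sqrt(n))
    return np.mean(np.abs(z) > z_crit)

type_i = rejection_rate(true_mean=0.0)    # null true:  P(reject) ~ alpha ~ 5%
power  = rejection_rate(true_mean=0.34)   # null false: P(reject) = power ~ 40%
print(f"Type I error rate ~ {type_i:.3f}")
print(f"Power ~ {power:.3f}, so Type II error rate ~ {1 - power:.3f}")
```

The first number sits near 5% only because those samples are drawn with the null actually true, which is exactly the problem with options A and B.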

α is the probability of rejecting _a true null hypothesis_ (i.e., the probability of a Type I error).

α _is not_ the probability of rejecting the null hypothesis.
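
One way to see the distinction, with made-up numbers (the 70% chance that the null happens to be true is purely illustrative; the 5% and 40% are reused from the question):

```python
alpha, power = 0.05, 0.40   # P(reject | H0 true), P(reject | H0 false)
p_null_true = 0.70          # hypothetical chance that the null is actually true

# Unconditional probability of rejecting the null: a mix of the two cases.
p_reject = p_null_true * alpha + (1 - p_null_true) * power
print(p_reject)             # 0.155, not 5% -- alpha is the conditional probability
```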

So does that mean the probability of a Type I error defines the significance level I need in the hypothesis test?

But it doesn’t mean the test statistic will actually fall outside the 95% confidence interval built around the mean value given in the null hypothesis.

Am I right?

The _probability of_ a Type I error defines the significance level you _choose_ in the hypothesis test.

It does mean that if and only if the distribution of the (sample) test statistic is exactly the same as the distribution you use to create the confidence interval.
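
As a concrete sketch of the mechanics (my own example, assuming a two-tailed z-test and using scipy only for the quantile): the alpha you choose fixes the critical values, and the test statistic lands outside them 5% of the time only if it really follows the assumed standard normal distribution under the null.

```python
from scipy.stats import norm

alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)   # ~1.96: critical value implied by the chosen alpha

def decision(z_stat):
    """Two-tailed decision rule for H0 at the chosen significance level."""
    return "reject H0" if abs(z_stat) > z_crit else "fail to reject H0"

print(round(z_crit, 2))    # 1.96
print(decision(2.30))      # reject H0
print(decision(1.10))      # fail to reject H0
```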

Thank you very much.

My pleasure.

Type I error is when you keep a manager who provides no value.

Type II error is when you fire a manager who is providing value.

Null is true, but rejected.

That’s the error for me.