# Type I vs. Type II Error

Decreasing the level of significance decreases the probability of a Type I error but increases the probability of a Type II error. Can someone explain why this is, please? Thanks!

Decreasing the level of significance decreases the size of the rejection region and increases the size of the acceptance region; therefore, you're less likely to reject the null hypothesis (whether it's true or not), and more likely to accept the null hypothesis (whether it's false or not).

So . . . you're less likely to reject a true null hypothesis (commit a Type I error), and more likely to accept a false null hypothesis (commit a Type II error).
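You can see the trade-off directly with a quick simulation. This is a sketch (not from the original thread): a one-sided z-test of H0: μ = 0 against H1: μ > 0 with known σ = 1, run many times with the null true (to estimate the Type I error rate) and with the null false (to estimate the Type II error rate), at a 5% and a 1% significance level. The function name `reject_rate` and the alternative mean of 0.5 are illustrative choices.

```python
import random
import statistics
from math import sqrt

def reject_rate(true_mean, z_crit, n=30, trials=20000, seed=1):
    """Fraction of samples for which a one-sided z-test rejects H0: mu = 0.

    Samples are drawn from Normal(true_mean, 1); H0 is rejected when the
    z-statistic exceeds the critical value z_crit for the chosen alpha.
    """
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        sample = [rng.gauss(true_mean, 1) for _ in range(n)]
        z = statistics.fmean(sample) * sqrt(n)  # (x̄ - 0) / (1 / √n)
        if z > z_crit:
            rejections += 1
    return rejections / trials

# One-sided critical values: 1.645 for alpha = 5%, 2.326 for alpha = 1%
for alpha, z_crit in [(0.05, 1.645), (0.01, 2.326)]:
    # H0 true (mu = 0): any rejection is a Type I error
    type1 = reject_rate(true_mean=0.0, z_crit=z_crit)
    # H0 false (mu = 0.5): any failure to reject is a Type II error
    type2 = 1 - reject_rate(true_mean=0.5, z_crit=z_crit)
    print(f"alpha = {alpha}: Type I rate ≈ {type1:.3f}, Type II rate ≈ {type2:.3f}")
```

Shrinking alpha pushes the critical value further into the tail, so rejections of a true null become rarer (Type I rate falls toward the new alpha) while rejections of a false null also become rarer (Type II rate rises), which is exactly the trade-off described above.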


A crisp and concise explanation for a cornerstone topic.

If you aren't leaving enough room in the distribution to find outliers, you might be committing a Type 1 error. Conversely, if you are finding too many outliers, you may be committing a Type 2 error.

Other way round:

• If you aren't leaving enough room in the distribution to find outliers, you might be committing a Type II error. If your level of significance is 0%, you'll always fail to reject the null hypothesis.
• Conversely, if you are finding too many outliers, you may be committing a Type I error. If your level of significance is 100%, you'll always reject the null hypothesis.