Confidence Intervals

For whatever reason I'm still struggling with this very basic concept as it relates to Type 1 and Type 2 errors and how it relates to confidence intervals, particularly around investment manager performance. The book states:

Null Hypothesis (Ho) = Managers add no value
Type 1 Error = Rejecting the null when it's in fact true (keeping bad managers)
Type 2 Error = Failing to reject the null when it's false (firing good managers)

Now I get that part. What I'm trying to understand is the concept of "widening" or "narrowing" confidence intervals. If the interval is narrowing, would that result in a lower significance level, which would mean more Type 1 errors? If the interval is widening, is the significance level higher, and would that lead to more Type 2 errors? Someone please explain this in a way I'll remember it :slight_smile: lol

PJ, I think I explained it once,703507,703856#msg-703856

Awesome… I had lost track of that post and completely forgot I had even seen it. Must have happened when I was knee deep in the thick of things. For those that don't want to follow the link, here's Volkovv's response:

#1 A narrower confidence interval will cause more Type I errors. A narrower confidence interval happens when your level of significance is higher. The level of significance (a.k.a. alpha) is the probability of rejecting a true Ho, or in other words the probability of a Type I error. When the level of significance is higher, say 0.1 vs. 0.05, the confidence level (1 - level of significance) is smaller, 90% vs. 95%. The t-value for a 90% confidence level is smaller than for 95%; by having a smaller t-value, it is easier for you to reject Ho, so the chances that you reject Ho when it's true are higher, and thus the chances of committing a Type I error are higher, and thus you are more prone to keep/hire a manager when he has no investment skill.
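A quick sketch of that point in code, if it helps it stick. It compares the two-tailed critical values at alpha = 0.10 vs. 0.05, using the standard normal (z) approximation to the t-distribution for simplicity; the manager's test statistic of 1.9 is a made-up number for illustration:

```python
# Higher alpha (lower confidence level) -> smaller critical value -> the same
# test statistic clears the bar more easily -> more Type I errors.
from statistics import NormalDist

z = NormalDist()  # standard normal, a large-sample stand-in for the t-dist

# Two-tailed critical values: reject Ho if |stat| > crit
crit_90 = z.inv_cdf(1 - 0.10 / 2)  # alpha = 0.10 -> 90% confidence, ~1.645
crit_95 = z.inv_cdf(1 - 0.05 / 2)  # alpha = 0.05 -> 95% confidence, ~1.960

stat = 1.9  # hypothetical manager's test statistic
print(stat > crit_90)  # True  -> "significant" (reject Ho) at alpha = 0.10
print(stat > crit_95)  # False -> not significant at alpha = 0.05
```

Same manager, same data: at alpha = 0.10 you conclude he has skill, at alpha = 0.05 you don't. That's exactly the "smaller t-value makes it easier to reject Ho" mechanism.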

Just remember that confidence intervals can also get narrower due to either larger sample sizes in your data/history or less variability (volatility) in your sample. If those are the reasons your confidence interval got smaller, your chances of making false positive claims about statistical significance (Type I errors) are no different. The chance of making a false positive claim is determined solely by the significance level (also called the alpha level) you choose, and that level determines the critical t-value. This is why it's important to decide on a significance level before you run your test, as opposed to just checking which significance level would let you make whatever claim you want to make. If you choose a very strict (small) significance level, like 0.001, your chance of making false positive claims is low, but your chance of making false negative claims (Type II errors) is higher. However, the chance of making a false negative is not simply (1 - significance level), since there is still a chance that you will make a correct claim, positive or negative.
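You can see this in a quick simulation. This is a hedged sketch, not anything from the curriculum: it generates returns where Ho is true (true mean = 0, i.e. no skill), runs a two-tailed test at alpha = 0.05 using the z approximation, and counts false positives at two different sample sizes. The trial counts and sample sizes are arbitrary illustrative choices:

```python
# The Type I error rate is set by alpha, not by sample size: a bigger sample
# narrows the confidence interval, but the false positive rate stays ~alpha.
import random
from statistics import NormalDist, mean, stdev

random.seed(42)
z_crit = NormalDist().inv_cdf(1 - 0.05 / 2)  # two-tailed alpha = 0.05

def false_positive_rate(n, trials=2000):
    """Share of trials where we wrongly reject a TRUE Ho (mean = 0)."""
    rejections = 0
    for _ in range(trials):
        sample = [random.gauss(0, 1) for _ in range(n)]  # Ho is true here
        t_stat = mean(sample) / (stdev(sample) / n ** 0.5)
        if abs(t_stat) > z_crit:
            rejections += 1
    return rejections / trials

# Both hover around 0.05 despite the 10x difference in sample size:
print(false_positive_rate(30))
print(false_positive_rate(300))
```

The larger sample gives you narrower intervals and hence more power against managers who *do* add value (fewer Type II errors), but the false positive rate against a no-skill manager stays pinned near whatever alpha you chose up front.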