Confidence Intervals

Hello all, ran into some confusion around confidence intervals. If you're using Kaplan's books, they're first introduced in the section on normal distributions on pg 213 of Book 1, which lays out the formula for a confidence interval as:

Mean +/- Degree of Confidence * Standard Deviation

Example from the book: the average return for a fund is 10.5% per year, the standard deviation of returns is 18%, and returns are normally distributed. The 95% confidence interval is calculated as:

10.5% +/- 1.96 * 18% = -24.78% to 45.78%

My confusion is that later in the book (specifically pg 247 and onward), confidence intervals are also calculated on normal distributions with known means and standard deviations, but with a very different formula:

Mean +/- Degree of Confidence * Standard Error

At first I thought this was simply because the first example used a known (perfectly symmetrical) distribution, so there was no need to account for a standard error - but if you compare the two formulas' inputs, the second formula will inherently produce a narrower range. There must be an obvious rule I'm overlooking, and I want to confirm when to apply each formula on the exam. Thank you all
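To make the first formula concrete, here's a quick Python sketch of the pg 213 example - it uses only the numbers given in the book:

```python
# First (pg 213) formula: interval for a SINGLE observation.
# z = 1.96 is the two-sided 95% critical value for a normal distribution.
z = 1.96
mean, sd = 10.5, 18.0  # average return and SD of returns, in percent

lower, upper = mean - z * sd, mean + z * sd
print(f"95% interval for one year's return: {lower:.2f}% to {upper:.2f}%")
# prints: -24.78% to 45.78%, matching the book
```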

The standard error (SD/√n) is used when a sample size n is given - i.e., when you're building a confidence interval around a sample mean. Use the SD alone when no sample is involved, i.e., when the interval is for a single observation drawn from the distribution.
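For comparison, here are the same fund numbers run through the standard-error formula - note that n = 25 is just a made-up sample size for illustration, not from the book:

```python
import math

# Second (pg 247) formula: interval for the SAMPLE MEAN.
# n = 25 is a hypothetical sample size chosen for illustration.
z = 1.96
mean, sd, n = 10.5, 18.0, 25

se = sd / math.sqrt(n)  # standard error of the mean = SD / sqrt(n)
lower, upper = mean - z * se, mean + z * se
print(f"95% CI for the mean return: {lower:.2f}% to {upper:.2f}%")
# prints: 3.44% to 17.56% - much narrower than the single-observation interval
```

Dividing by √n is what shrinks the interval: averaging n observations washes out much of the year-to-year noise, so the mean is pinned down far more precisely than any single year's return.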

Thanks, much appreciated.