Confidence intervals; standard error vs standard deviation

Hello, please excuse my English

Could someone tell me why we don’t use the confidence interval formula (point estimate ± (reliability factor × standard error)) here, and instead use the standard deviation directly?

The question is from Schweser L1 Practice Exam 4, QM, Q30.

I assume we should use standard error in this situation because we want to estimate the population mean.

Best regards,

They have a sample of means, not a sample of individual returns.


Where did the square root of n go?

I don’t know if you still need an opinion, but I think they use the standard deviation instead of the standard error because they have a sample of returns and want to know the dispersion of those values.
If they had a sample of means (from a sampling distribution) and wanted to indicate how far the sample mean deviates from the population mean, then the standard error should be used.
Sorry if my answer causes any confusion.
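To make the distinction concrete, here is a minimal sketch. The returns data is entirely made up for illustration; the point is that the standard deviation describes the spread of the individual returns, while the standard error (standard deviation divided by the square root of n) describes the precision of the sample mean, and only the latter goes into the confidence interval for the population mean:

```python
import math

# Hypothetical sample of 8 monthly returns (in %), for illustration only.
returns = [1.2, -0.5, 0.8, 2.1, -1.0, 0.4, 1.6, 0.3]
n = len(returns)

mean = sum(returns) / n

# Sample standard deviation: dispersion of the individual returns.
variance = sum((r - mean) ** 2 for r in returns) / (n - 1)
std_dev = math.sqrt(variance)

# Standard error: how much the sample mean is expected to vary around
# the population mean. This is where the square root of n comes in.
std_error = std_dev / math.sqrt(n)

# 95% confidence interval for the population mean, using z = 1.96 as
# the reliability factor (assuming a normal/large-sample approximation;
# a t reliability factor would be more precise for n this small).
ci_lower = mean - 1.96 * std_error
ci_upper = mean + 1.96 * std_error

print(f"mean               = {mean:.4f}")
print(f"standard deviation = {std_dev:.4f}")
print(f"standard error     = {std_error:.4f}")
print(f"95% CI: [{ci_lower:.4f}, {ci_upper:.4f}]")
```

So if a question asks about the dispersion of the returns themselves, the standard deviation is used directly; the division by √n only appears when you are estimating the population mean.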