If you have any Level I Quant questions, you can post them under this thread by end of day today (8 May). I will try to address as many of these questions as possible.
Thanks for volunteering!
I consistently have issues with the meaning behind a confidence interval.
As I understand it, in general, a confidence interval is usually:
Mean +/- Reliability factor(Standard error)
I have trouble understanding what this tells me. Does this mean that with a 90% confidence interval, 90% of the sample will fall within +/- (reliability factor × standard error) of the mean? This seems to be an important stepping stone that is limiting my ability to understand hypothesis testing and how to check whether a value falls in this range.
When you have a 90% confidence interval for the mean (which is what you’ve described), it means that there is a 90% probability that the mean of the population lies in that interval (which is constructed based on the mean of the sample). There’s a 5% probability that the population mean lies in the region less than the interval, and a 5% probability that the population mean lies in the region greater than the interval.
You can also have a 90% confidence interval for a random observation (which uses the standard deviation instead of the standard error); it will be wider than the confidence interval for the mean.
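To make the formula above concrete, here is a small sketch of building a 90% confidence interval for the population mean from a hypothetical sample (the data values and the z-reliability factor of 1.645 are illustrative assumptions, not from the original question):

```python
# Sketch: 90% confidence interval for the population mean,
# computed as sample mean +/- reliability factor * standard error.
# The sample below is hypothetical, chosen only for illustration.
import math
import statistics

sample = [9.8, 10.2, 10.0, 9.6, 10.4, 10.1, 9.9, 10.3, 9.7, 10.0]

mean = statistics.mean(sample)
std_dev = statistics.stdev(sample)            # sample standard deviation
std_err = std_dev / math.sqrt(len(sample))    # standard error of the sample mean

z = 1.645                                     # reliability factor for 90% confidence (z-statistic)
lower = mean - z * std_err
upper = mean + z * std_err

print(f"90% CI for the population mean: ({lower:.3f}, {upper:.3f})")
```

Note that the interval is built around the sample mean but is a statement about where the population mean plausibly lies; with a larger sample the standard error shrinks and the interval tightens. (Strictly, with a small sample and unknown population variance you would use a t-reliability factor instead of 1.645, which gives a slightly wider interval.)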
To say it simply, the confidence interval is an indication of how accurate the sample is in comparison to the population mean? In this case, we’re 90% confident the sample mean falls within that +/- range of the population mean? Is this for accuracy or for testing? I just don’t get how this applies…
thanks for the help