You can estimate the population mean from a sample by calculating the sample mean. However, this estimate is not exact and is itself a random variable with its own distribution. The mean of the distribution of the sample mean is equal to the population mean. The standard deviation of the distribution of the sample mean (the standard error) is calculated with the central limit theorem and is equal to σ/√n if you know the standard deviation of the original distribution (σ), or s/√n if you only know the sample standard deviation s (which is calculated with the n-1 formula).
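A quick Python sketch of the distinction (the sample values below are made up purely for illustration; note that statistics.stdev already uses the n-1 formula):

import math
import statistics

# Hypothetical sample from some population (made-up numbers)
sample = [2.3, 1.9, 2.8, 2.1, 2.6, 2.4, 1.7, 2.9]
n = len(sample)

x_bar = statistics.mean(sample)   # point estimate of the population mean
s = statistics.stdev(sample)      # sample standard deviation, n-1 divisor
se = s / math.sqrt(n)             # estimated standard error of the mean, s/sqrt(n)

print(x_bar, s, se)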
The central limit theorem has more to do with the distributional properties of the mean of a large number of independent variables. The mean and variance of the sample mean can be calculated easily, but the real miracle is that the distribution of the mean is normal. http://en.wikipedia.org/wiki/Central_limit_theorem Your question has more to do with the properties of variance (particularly, the variance of the mean) as well as unbiased estimation of the sample variance: http://en.wikipedia.org/wiki/Variance I hope the links help.
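To make the variance-of-the-mean part concrete, here is a small simulation sketch (the parameters are toy values of my own choosing): draw many samples of size n from a population with known σ, and the variance of the resulting sample means comes out close to σ²/n.

import random
import statistics

random.seed(0)
mu, sigma, n, trials = 5.0, 2.0, 25, 20000

# Collect the mean of each of many independent samples of size n
means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(trials)]

print("empirical Var(sample mean):", statistics.pvariance(means))
print("theoretical sigma^2 / n:   ", sigma ** 2 / n)  # 4/25 = 0.16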
Maratikus has this and I encourage you to check out those links. There are a few things here that are mixed up:

a) The variance of the sample mean is σ^2/n. This is really easily shown using just the definition of variance and some eighth-grade algebra. It doesn't rely on the central limit theorem, and it is always true as long as σ exists, i.e., the variance of the population is finite.

b) An unbiased estimator of the variance of a population is sum((x[i] - x-bar)^2)/(n-1). This (n-1) thing in the denominator is going away when I rewrite all the statistics books in the world; it confuses so many people for some small and asymptotically non-existent benefit in estimating the variance. The scoop is that all the observations are likely to be closer to their sample mean than to the population mean, because the observations themselves were used to calculate the sample mean. It turns out that if you are estimating the variance, the way to make this bias systematically go away is to divide by n-1 instead of n. An interesting thing to note is that as soon as you take the square root and use the sample standard deviation (which is used much more frequently than the sample variance), you aren't unbiased anymore; to get an unbiased estimate of the standard deviation you need a correction factor involving a gamma function that you only see in books. The (n-1) thing is just silly...

c) The central limit theorem states that the sample mean is approximately normally distributed for large samples. This is a serious "wow" theorem. The underliers can have any old wacky distribution they want, but the sampling distribution of the sample mean is normal (Gaussian). This is one of those theorems that makes mathematicians believe in God ordering the universe, so it is much more profound than "the variance of the distribution of the sample mean is σ^2/n". I suggest you contemplate the theorem until you have a moment when you say, "That can't be true, but - holy cow - it is true."

BTW - I would say that huge areas of risk management would not be possible without the central limit theorem. For example, I can take tons of different kinds of securities, put them in a portfolio, and if I'm willing to believe some assumptions (and use a fancier version of the CLT than is given in the CFA curriculum), I can assume that the return of the portfolio is normally distributed. That means I can calculate VaR limits, for example. Note that I might have a portfolio full of reverse Joeyed quanto rainbow Margrabe swap options that have some unknowable return distribution, but the distribution of the returns on the portfolio is known and relatively simple. That's pretty amazing...
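If contemplation isn't enough, points b) and c) are easy to see in a toy simulation (my own setup, nothing from the curriculum): averaging the /n and /(n-1) variance estimates over many samples shows the bias, and the sample means of an exponential underlier (about as skewed as everyday distributions get) still pile up looking normal.

import random
import statistics

random.seed(1)
trials = 20000

# b) With sigma^2 = 1 and samples of size 5, the /n estimator averages
#    about (n-1)/n = 0.8, while the /(n-1) estimator averages about 1.0.
samples = [[random.gauss(0, 1) for _ in range(5)] for _ in range(trials)]
print(statistics.fmean(statistics.pvariance(s) for s in samples))  # ~0.8, biased
print(statistics.fmean(statistics.variance(s) for s in samples))   # ~1.0, unbiased

# c) Exponential underlier (mean 1, sd 1, heavily right-skewed); the
#    sample means still cluster symmetrically around 1 with sd ~ 1/sqrt(50).
n = 50
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(trials)]
print(statistics.fmean(means))   # ~1.0
print(statistics.pstdev(means))  # ~0.141 = 1/sqrt(50)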
"Take several samples from the population and the distribution of their means would be approximately normal" - this is what the central limit theorem says. We use n-1 in the denominator when estimating the variance from a single sample. The σ^2/n here is different: it is the variance of the sample mean across many repeated samples, not the variance within one sample.