There is the theory that using the entire number of sample observations, n, rather than n - 1, as the divisor in the computation of the sample variance will systematically underestimate the population parameter. That underestimation makes the sample variance a biased estimator of the population variance.
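That underestimation is easy to check empirically. The following sketch (my own illustration, not from any curriculum text) draws many small samples from a population with known variance and averages the n-divisor and (n - 1)-divisor estimates; with n = 5 the n-divisor average should come out near (n - 1)/n of the true variance.

```python
import random
import statistics

random.seed(42)
true_var = 4.0      # population: Normal(mu=0, sigma=2), so variance = 4
n, trials = 5, 200_000

biased_sum = 0.0    # running total of the n-divisor estimates
unbiased_sum = 0.0  # running total of the (n - 1)-divisor estimates
for _ in range(trials):
    sample = [random.gauss(0, 2) for _ in range(n)]
    biased_sum += statistics.pvariance(sample)   # divides by n
    unbiased_sum += statistics.variance(sample)  # divides by n - 1

print(biased_sum / trials)    # close to (n - 1)/n * 4 = 3.2
print(unbiased_sum / trials)  # close to 4.0
```

On average the n-divisor version undershoots the true variance by exactly the factor (n - 1)/n, which is why multiplying back by n/(n - 1), i.e. dividing by n - 1 in the first place, removes the bias.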

Why does using n - 1 instead of n as the denominator improve the statistical properties of the sample variance as an estimator of the population variance? Is there a mathematical (calculus) explanation, or is there some intuition behind it?
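There is a short expectation calculation behind it (a standard derivation, sketched here for reference). Writing $\bar{X}$ for the sample mean, $\mu$ and $\sigma^2$ for the population mean and variance:

```latex
\mathbb{E}\!\left[\sum_{i=1}^{n}(X_i - \bar{X})^2\right]
  = \sum_{i=1}^{n}\mathbb{E}[X_i^2] - n\,\mathbb{E}[\bar{X}^2]
  = n(\sigma^2 + \mu^2) - n\!\left(\frac{\sigma^2}{n} + \mu^2\right)
  = (n-1)\,\sigma^2
```

So dividing the sum of squared deviations by n gives an estimator with expectation $\frac{n-1}{n}\sigma^2 < \sigma^2$, while dividing by n - 1 gives expectation exactly $\sigma^2$. The intuition: the deviations are measured from $\bar{X}$, which is itself fitted to the same data, so they are on average smaller than deviations from the true mean $\mu$; one degree of freedom is "used up" estimating the mean.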

I just find the Schweser description of this not very helpful for understanding it.

In practice, when you work with big samples the distinction is of no practical importance. And in cases where it would matter, you would probably be in trouble anyway, since you would most likely be running calculations or performing hypothesis tests on a very small data set.

For the CFA curriculum, however, you should be very aware of the difference and know when to apply N and when to apply N - 1.