Tracking Error - Variance vs. MAD


I am taking the CFA Level 1 in June 2016. Anyway, while reading the statistics section in Schweser, I found a question I was stumped on:

It asked what measurement should be used to track a portfolio that is attempting to mimic the returns on the market. I thought the answer was the MAD of the tracking error (mean absolute deviation). They said this was correct, but that the variance/standard deviation of the tracking error was a better measurement.

Can anyone explain to me why this is the case? I am confused and would love to know the answer (intuitively, of course).

Intrinsically, there’s nothing wrong with using MAD, and there’s nothing inherently better about standard deviation or variance.

The only reason that statisticians use σ and σ² is that they can do calculus with them, whereas they cannot do calculus with MAD: the absolute value function is not differentiable at zero, so MAD does not lend itself to closed-form optimization the way squared deviations do.

Statisticians love calculus.
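For concreteness, here is a small sketch (with made-up tracking-error numbers, i.e. portfolio return minus benchmark return each period) showing that both measures summarize the same dispersion, just with different weighting of large deviations:

```python
import statistics

# Hypothetical periodic tracking errors: portfolio return minus benchmark return
tracking_errors = [0.002, -0.001, 0.003, -0.004, 0.001, -0.002, 0.0005]

mean_te = sum(tracking_errors) / len(tracking_errors)

# Mean absolute deviation: average absolute distance from the mean
mad = sum(abs(te - mean_te) for te in tracking_errors) / len(tracking_errors)

# Sample standard deviation: root of the (n-1)-averaged squared deviations
std = statistics.stdev(tracking_errors)

print(f"MAD of tracking error: {mad:.6f}")
print(f"Std of tracking error: {std:.6f}")
```

Note that the standard deviation comes out larger than the MAD here: squaring penalizes the big misses (like the -0.004 period) more heavily, which is exactly why some practitioners prefer it for an index-tracking mandate.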