Estimation error in expected returns has been estimated to be roughly 10 times as important as estimation error in variances and 20 times as important as estimation error in covariances.
How should I think about this? Is the optimization really 10 times as sensitive to errors in expected returns as to errors in variances?
Quite simply: as per 6.1.2, The Importance of the Quality of Inputs, "the most important inputs in mean–variance optimization are the expected returns."
In a nutshell, relying on an incorrect covariance estimate (say the estimate is 1% off) throws the optimization off by a certain quantum: the covariance estimation error forces the optimization to include, say, x% more equity than would have been appropriate, i.e. than the optimization would have contained had we known the "true" covariance parameter.
Now, if that same 1% error affects the return estimates instead of the covariance estimates, the repercussion on the optimization is larger: the equity portion may now be 20x% larger than it should have been given the "true" return parameter.
In other words, a 1% estimation error in returns is 20 times more damaging to the quality of a given optimization than a 1% estimation error in covariances.
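You can sanity-check this intuition with a quick simulation. The sketch below (all inputs are made-up illustrative numbers, not the original study's data) builds a long-only mean–variance portfolio, then perturbs the expected returns and the variances by the same relative error and compares the average certainty-equivalent loss each kind of error causes. The exact ratio depends on the inputs and the risk tolerance, so don't expect precisely 10x or 20x, but return errors should come out clearly more damaging:

```python
# Monte Carlo sketch: same-sized estimation errors in expected returns vs.
# variances, and how much each hurts a mean-variance optimization.
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

mu = np.array([1.0, 1.1, 1.2])           # "true" expected returns (% per period)
vol = np.array([4.0, 5.0, 6.0])          # "true" volatilities (%)
corr = np.full((3, 3), 0.5)
np.fill_diagonal(corr, 1.0)
cov = corr * np.outer(vol, vol)          # "true" covariance matrix
t = 50.0                                 # risk tolerance in the utility w'mu - w'Sw/t

def optimal_weights(m, S):
    """Long-only, fully invested maximizer of m'w - w'Sw/t."""
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(lambda w: -(w @ m - w @ S @ w / t),
                   np.ones(3) / 3, bounds=[(0.0, 1.0)] * 3, constraints=cons)
    return res.x

def ce(w):
    """Certainty equivalent of a weight vector under the TRUE parameters."""
    return w @ mu - w @ cov @ w / t

w_true = optimal_weights(mu, cov)
ce_true = ce(w_true)

k = 0.10                                 # 10% relative estimation error
n = 200
loss_mean = 0.0                          # CE lost due to errors in the means
loss_var = 0.0                           # CE lost due to errors in the variances
for _ in range(n):
    z = np.clip(rng.standard_normal(3), -3.0, 3.0)
    # optimize on wrong expected returns, evaluate under the true parameters
    loss_mean += ce_true - ce(optimal_weights(mu * (1 + k * z), cov))
    # optimize on wrong variances (correlations fixed, so the matrix stays valid)
    vol_err = vol * np.sqrt(1 + k * z)
    loss_var += ce_true - ce(optimal_weights(mu, corr * np.outer(vol_err, vol_err)))

print(f"avg CE loss, return errors:   {loss_mean / n:.4f}")
print(f"avg CE loss, variance errors: {loss_var / n:.4f}")
print(f"ratio: {loss_mean / loss_var:.1f}x")
```

The design choice that drives the result: at a high risk tolerance the optimizer mostly chases the return estimates, so a small wobble in the means reshuffles the weights a lot, while an equal-sized wobble in the variances barely moves them.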
Good luck, Carlo
Does anyone else have an explanation for this? I think I still don't understand why. Thanks!