Active Risk and the Information Ratio

I’m having a difficult time wrapping my head around something about the way active risk (AR) is computed.

All along, I thought AR could be computed either way and the result would be the same:

  1. the difference between the portfolio STDEV and the benchmark STDEV, or
  2. the STDEV of the period-by-period differences between the portfolio and benchmark returns.

It turns out the result isn’t the same. It’s usually close, but not exact.
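A quick sanity check in Python shows how far apart the two methods can be (the return series below are made up purely for illustration):

```python
import statistics

# Hypothetical annual returns (%) for a portfolio and its benchmark
portfolio = [11.0, 10.0, 12.0]
benchmark = [10.0, 11.0, 9.0]

# Method 1: difference of the two standard deviations
method_1 = statistics.stdev(portfolio) - statistics.stdev(benchmark)

# Method 2: standard deviation of the period-by-period differences
active_returns = [p - b for p, b in zip(portfolio, benchmark)]
method_2 = statistics.stdev(active_returns)

print(method_1)  # 0.0 -- both series have STDEV 1
print(method_2)  # 2.0 -- yet the active returns are quite volatile
```

Here the two series have identical total risk, so method 1 reports zero, while method 2 picks up the fact that the returns move out of sync.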

#2 is the correct way, according to the text, but #1 is the way I think it should work. I know I’m wrong about this, but I just don’t get the reasoning behind it.

Say my portfolio has an STDEV of 12% and the passive benchmark STDEV is 10%. Isn’t the extra risk exactly 2%? After all, my portfolio is 2% riskier, right?

Why ISN’T #1 the correct way?

#1 isn’t the correct way because the portfolio and benchmark may not always move in lockstep, i.e. they are not perfectly correlated. For example, even if the portfolio return deviates by one STDEV (12% in your example), the benchmark may not deviate at all from its mean. Mathematically, Var(X − Y) = Var(X) + Var(Y) − 2 Cov(X, Y). Hence stdev(X − Y) = stdev(X) − stdev(Y) only if their correlation is 1.
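You can verify that variance identity numerically. A minimal sketch (the two return series are my own made-up numbers):

```python
import statistics

# Hypothetical return series, just to check the identity numerically
x = [10.0, 12.0, 8.0, 11.0, 9.0]
y = [9.0, 13.0, 7.0, 10.0, 11.0]

# Sample covariance, computed by hand to match statistics.variance's n-1 convention
n = len(x)
mx, my = statistics.mean(x), statistics.mean(y)
cov_xy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

# Var(X - Y) versus Var(X) + Var(Y) - 2 Cov(X, Y)
lhs = statistics.variance([a - b for a, b in zip(x, y)])
rhs = statistics.variance(x) + statistics.variance(y) - 2 * cov_xy
print(abs(lhs - rhs) < 1e-12)  # True: the identity holds
```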

But why do we care about the correlation? My portfolio is 2% more risky even though the returns don’t move in lockstep with the b/mark. If the standard deviation of nominal returns captures total risk, why concern ourselves with the variability of the differential?

The correlation affects the denominator of the IR (the standard deviation of the active returns). Your portfolio is not actually 2% more risky relative to the benchmark - the active risk may be more or less than that, depending on the correlation.
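Plugging the 12%/10% figures from above into the variance identity shows how much the correlation matters (a sketch; the correlation values are hypothetical):

```python
import math

# Tracking error implied by sigma_p = 12, sigma_b = 10
# at a few assumed correlations
sigma_p, sigma_b = 12.0, 10.0
for rho in (1.0, 0.9, 0.5):
    te = math.sqrt(sigma_p**2 + sigma_b**2 - 2 * rho * sigma_p * sigma_b)
    print(rho, round(te, 2))
# 1.0 -> 2.0, 0.9 -> 5.29, 0.5 -> 11.14
```

Only at a correlation of exactly 1 does the tracking error collapse to the 2% difference in total risk; at any lower correlation it is larger.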

To illustrate my point, let’s look at two managers and one benchmark over a 3-year period. Assume that the benchmark’s returns are 10, 11 and 9; STDEV is therefore 1. Manager A’s returns are 10, 9 and 11; STDEV is also 1, and the active risk is 2. Manager B’s returns are 10, 12 and 8; STDEV is 2, and the active risk is 1. Both managers deliver identical positive alpha. According to the information ratio, we should prefer manager B to manager A, but manager B has greater absolute risk. Why do we prefer B?

The numerator is going to be 0 in each case:

0 + 1 + (-1) = 0 (for B) -> active risk = 1
0 + (-2) + 2 = 0 (for A) -> active risk = 2

Since the numerators are the same and B has the lower active risk, B would be the better one. But how did you get “identical positive alpha”? Isn’t the alpha 0 in both cases?

Oops, my bad! Manager A’s returns should be 11, 10 and 12: STDEV = 1, average return = 11, IR = 0.5. Manager B’s returns should be 11, 13 and 9: STDEV = 2, average return = 11, IR = 1. Both managers now have a positive alpha of 1. Manager A has less risk but a lower IR, while manager B is the opposite. So why do we prefer B?
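For anyone following along, these corrected figures check out. A small sketch of the IR calculation (using method #2 for the denominator, as the text prescribes):

```python
import statistics

benchmark = [10.0, 11.0, 9.0]
manager_a = [11.0, 10.0, 12.0]
manager_b = [11.0, 13.0, 9.0]

def information_ratio(portfolio, benchmark):
    # Active return: mean of the period-by-period differences
    active = [p - b for p, b in zip(portfolio, benchmark)]
    # Active risk: STDEV of those same differences (method #2)
    return statistics.mean(active) / statistics.stdev(active)

print(information_ratio(manager_a, benchmark))  # 0.5
print(information_ratio(manager_b, benchmark))  # 1.0
```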

You would use IR pretty much like a Sharpe Ratio. So the one with the higher IR is better - in this case B - or am I confused?

B is riskier than A. The standard deviation of B is 2, while the figure for A is 1. Thus, we should prefer A. Yet, because of the way that active risk is calculated, we are led to prefer B. If instead active risk were based on the difference between the portfolio STDEV and the benchmark STDEV, we would prefer A, which delivers the same alpha at lower risk.

The IR doesn’t tell you how risky your actual portfolio is; it tells you how risky your active returns are - or, put differently, how much your portfolio returns vary from the benchmark returns.

Think of it like this: if you have a risky benchmark, like say some international stock benchmark, we would expect the returns to vary quite a bit YoY. However, say your portfolio is full of 1-year treasuries. That’s not going to track the benchmark very well, even though you would expect it to have lower total risk than a portfolio of foreign stock (which would TRACK your benchmark better). The portfolio of treasuries would actually have a higher tracking risk, and so a lower IR, than the foreign stock portfolio.

When you look at the active returns of your A and B portfolios above for each year, you see greater volatility of the return deviations in portfolio A (+1, -1, and +3) than with portfolio B (+1, +2, and 0). That’s why you would choose B: even though it has greater total risk, it tracks the returns of the benchmark better (lower tracking risk, therefore higher IR).

FinNinja, I totally get what you’re saying in the last paragraph. As I read into the CFA text it corroborates this as well – active risk works off the variability of those excess returns. Where I’ve gone sideways is in thinking that IR is a measure of units of total excess risk for units of alpha. It is, sort of, but not exactly.

Another example: imagine your portfolio has a realized average return of 5% and a risk (stdev of returns) of 2%. Your benchmark also has a 2% risk, so the absolute difference in risk is 0%. However, you can imagine two very different cases:

  1. Every period, your portfolio return equals your benchmark return. The tracking error, or active risk, is 0% in this case.
  2. Every period, your portfolio return = 10% - benchmark return. [Essentially, the benchmark return is generated by flipping the sign of the portion of the portfolio return that fluctuates around 5%. For example, if your portfolio returned 5.5% in one period, the benchmark would return 4.5%.] Realized tracking error is now 4%, since the correlation is -1.

This 4% risk is a more meaningful number than just the difference between 2% and 2% = 0%. The active return is 0, so the IR is 0, but you can make this example a bit more complex to get a non-zero IR.
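Case 2 is easy to reproduce numerically. A sketch with made-up returns chosen so each leg has a stdev of exactly 2:

```python
import statistics

# Hypothetical returns (%); every period the benchmark returns
# 10% minus the portfolio return, so the correlation is -1
portfolio = [5.0, 7.0, 3.0, 7.0, 3.0]
benchmark = [10.0 - p for p in portfolio]

print(statistics.stdev(portfolio))  # 2.0
print(statistics.stdev(benchmark))  # 2.0 -> method #1 reports 0% extra risk

active = [p - b for p, b in zip(portfolio, benchmark)]
print(statistics.stdev(active))     # 4.0 -> realized tracking error
```

The stdev-difference view says the two legs carry identical risk, while the tracking error correctly reports 4%, twice the portfolio’s own volatility.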

I get what you’re saying with the two examples, and it’s consistent with the text. Still, even granting that #2 is the correct calculation, the difference in total risk between the manager and the benchmark feels like the intuitive answer. In evaluating an active manager, the question I want answered is: how much total risk is being assumed for the additional return I hope to get?