You know how they are testing to see what the valuation is most sensitive to… well, the method is correct, but I have a problem with how they determine the high and low estimates. For example, to go from base to high, the risk-free rate was increased by 2.3% while the risk premium was increased by 18%… so obviously you are going to get a bigger range with the risk premium, and you are going to assume the valuation is most sensitive to that… so basically I could have shocked up the risk-free rate by 200% and come and told you the valuation is most sensitive to that…?
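To put rough numbers on what I mean (every figure here is hypothetical, just to show how unequal shock sizes drive the ranges):

```python
# Gordon-growth DDM with made-up base-case numbers; the point is only
# that shocking one input by 2.3% and another by 18% makes the ranges
# incomparable, while equal-percentage shocks give a fair comparison.

def ddm_value(d1, rf, premium, g):
    """Gordon growth model: V = D1 / (r - g), with r = rf + premium."""
    r = rf + premium
    return d1 / (r - g)

d1, rf, premium, g = 2.0, 0.04, 0.06, 0.03   # hypothetical base case

base = ddm_value(d1, rf, premium, g)

# Unequal shocks, like in the problem: rf up 2.3%, premium up 18%
v_rf_hi = ddm_value(d1, rf * 1.023, premium, g)
v_pr_hi = ddm_value(d1, rf, premium * 1.18, g)

print(f"base value:      {base:.2f}")
print(f"rf +2.3%:        {v_rf_hi:.2f}  (move {abs(v_rf_hi - base):.2f})")
print(f"premium +18%:    {v_pr_hi:.2f}  (move {abs(v_pr_hi - base):.2f})")

# Equal-percentage shocks make the comparison fair
v_rf_eq = ddm_value(d1, rf * 1.10, premium, g)
v_pr_eq = ddm_value(d1, rf, premium * 1.10, g)
print(f"rf +10%:         {v_rf_eq:.2f}")
print(f"premium +10%:    {v_pr_eq:.2f}")
```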

Yes, the values they are using are arbitrary, and you could get different results when looking at another company with, for example, a large degree of variance in the beta. However, I’m not sure if the answer will ever be the risk-free rate, since we expect r > g, and r > risk-free (where r is the required rate). Variability is risk, so risk-free should mean little to no volatility. It’s just one example, with one ranking for a particular company.

But the point is, to compare properly, shouldn’t you shock by the same percentage? I guess you kind of answered that when you said the risk-free rate has little volatility.

Oh, I get you. No, they are not testing the sensitivity of the factors in the model itself, nor would you in a real-life test. It’s not a theoretical exercise, it’s a risk analysis (at least that’s how I see it). To test sensitivity, you look at possible (probable?) high and low scenarios of each factor–not arbitrary ones. At that point, you can rank your risk exposure. ---- ETA: As an aside, the greatest factor influencing the model, I would guess, would be the risk premium (which drives the required rate, r) and/or the growth rate. In the DDM formula, the denominator (r - g) is a strong leverage factor. Both are based on estimates, which makes their variability higher as well.
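As a rough illustration of that kind of ranking (all numbers made up; in practice the low/high scenarios would be firm-specific estimates, not arbitrary shocks):

```python
# For each factor, plug in a plausible low/high scenario into the
# Gordon-growth DDM, V = D1 / (r - g), then rank the factors by the
# width of the value range each one produces.

def ddm_value(d1, rf, premium, g):
    r = rf + premium
    return d1 / (r - g)

base = dict(d1=2.0, rf=0.04, premium=0.06, g=0.03)  # hypothetical base case

# Hypothetical low/high scenarios per factor
scenarios = {
    "rf":      (0.035, 0.045),
    "premium": (0.045, 0.080),
    "g":       (0.020, 0.040),
}

ranges = {}
for factor, (lo, hi) in scenarios.items():
    v_lo = ddm_value(**{**base, factor: lo})
    v_hi = ddm_value(**{**base, factor: hi})
    ranges[factor] = abs(v_hi - v_lo)

# Widest range = biggest risk exposure
for factor, width in sorted(ranges.items(), key=lambda kv: -kv[1]):
    print(f"{factor:8s} value range: {width:.2f}")
```

With these made-up inputs the premium and the growth rate dominate the risk-free rate, which matches the guess above: both sit in the (r - g) denominator, where small moves are strongly levered.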

Thanks, I was thinking of it as a sensitivity test of the model itself… lol, I guess that won’t be very useful… once we test the model once, that’s it, someone will publish it in a book… so yeah, now I am going to think of it like you do, as risk analysis… thanks again.

I was thinking about this over my lunch break. I still stand by what I wrote regarding the problem, but thought I should put down something based on what I remembered while studying, since my wording was kinda bad.

“Sensitivity analysis” is usually moving one factor up or down 1% and seeing the impact on results, e.g. discount rate changes. You are really only comparing the outcome to itself, so you can say, “if the discount rate is 1% higher or lower, our overall NPV will be respectively 5% lower or higher.” That’s how “sensitive” our model is to the input at 1%.

“Scenario testing” is when you take the best and worst case for a variable, or set of variables, so you can see the impact of extreme situations. The problem presented was, as I see it, a scenario test of the best and worst situations of each particular variable, and its individual impact.

“Monte Carlo simulation” makes one variable, or many variables, stochastic (changing by an assigned distribution) and allows a set of confidence levels and probabilities of outcomes to be produced.

I surmise you just have to ask what the purpose of your test is: are you testing the model (sensitivity), or are you testing probable outcomes (scenarios)?
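If it helps, here is a tiny sketch of all three tests side by side, under made-up assumptions (the base case, scenario bounds, and the distribution for r are all hypothetical):

```python
# Same Gordon-growth value V = D1 / (r - g) used three ways:
# sensitivity, scenario, and Monte Carlo.
import random
import statistics

def value(d1=2.0, r=0.10, g=0.03):
    return d1 / (r - g)

base = value()

# 1) Sensitivity analysis: move one input up/down 1%, compare to base.
up, down = value(r=0.10 * 1.01), value(r=0.10 * 0.99)
print(f"r +1%: value {100 * (up / base - 1):+.2f}% | "
      f"r -1%: value {100 * (down / base - 1):+.2f}%")

# 2) Scenario test: best and worst case for one variable (here, g).
best, worst = value(g=0.04), value(g=0.02)
print(f"best/worst g scenario: {best:.2f} / {worst:.2f}")

# 3) Monte Carlo: make r stochastic, look at the outcome distribution.
random.seed(0)
draws = sorted(value(r=random.gauss(0.10, 0.01)) for _ in range(10_000))
lo, hi = draws[int(0.05 * len(draws))], draws[int(0.95 * len(draws))]
print(f"median value {statistics.median(draws):.2f}, "
      f"90% interval [{lo:.2f}, {hi:.2f}]")
```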