I’ve been asked to build a product that performs a Monte Carlo analysis to plot the expected distribution of outcomes in terms of the funding ratio of a pension plan. My thought was to:

1. Generate a normally distributed random deviate with an expected return and standard deviation equal to an equity return.
2. Generate a uniformly distributed random number between 0 and 1. Use this number to generate a random walk simulating the movement of risk-free interest rates based on rules such as: if the random number is between 0 and .1, interest rates fall by .5%; if between .1 and .3, rates fall by .25%; if between .3 and .5, rates remain unchanged, etc…
3. Generate a normally distributed random deviate with an expected value and standard deviation matching the distribution of corporate credit spreads in a time series. Add this to our risk-free interest rate to determine a yield on corporate bonds.
4. Decide on a duration for our fixed income investments. Use the change in yield calculated in step 3 and the expected yield on corporate fixed income to determine our total return on fixed income.
5. Calculate our total asset return by weighting the equity and fixed income returns by our portfolio weights. Use this to calculate our pension asset value.
6. Calculate our pension liability value as the NPV of our projected liability cash flows, discounted at our new corporate credit yield adjusted for the difference in duration between the fixed income asset and the pension liability.
7. Plot the funding ratio as pension assets / pension liabilities.
8. Repeat the process for the next 4-5 years.
9. Repeat that process 5,000 to 10,000 times to explore multiple scenarios.
10. Calculate the 5th and 95th percentile of funding ratio outcomes for each year.
11. Repeat for different asset allocations and different fixed income duration assumptions, and plot the changes in the distribution of outcomes.
So does this seem like a reasonable progression of steps to get to a distribution of outcomes or does it look like the output will be ridiculous?
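For anyone trying to picture the loop, here is a rough Python sketch of the steps above. Every number in it (starting rates, expected returns, vols, durations, the placeholder liability cash flows, the rate floor) is an illustrative assumption, not a recommendation:

```python
import random

def rate_move(u):
    """Step 2: map a uniform draw to a discrete rate change (illustrative buckets)."""
    if u < 0.1:
        return -0.005
    if u < 0.3:
        return -0.0025
    if u < 0.5:
        return 0.0
    if u < 0.9:
        return 0.0025
    return 0.005

def one_trial(years=5, w_eq=0.6, dur_fi=8.0, seed=None):
    rng = random.Random(seed)
    rf = 0.03                                  # starting risk-free rate (assumed)
    cash_flows = [8.0] * 30                    # placeholder projected liability cash flows
    assets = 100.0                             # placeholder starting asset value
    yield_prev = rf + 0.015
    path = []
    for _ in range(years):
        eq_ret = rng.gauss(0.08, 0.16)                    # step 1: equity return
        rf = max(0.0025, rf + rate_move(rng.random()))    # step 2, with a floor
        spread = max(0.0, rng.gauss(0.015, 0.005))        # step 3: credit spread
        y = rf + spread                                   # corporate yield
        fi_ret = yield_prev - dur_fi * (y - yield_prev)   # step 4: income + duration effect
        total = w_eq * eq_ret + (1 - w_eq) * fi_ret       # step 5: portfolio return
        assets *= 1 + total
        liab = sum(cf / (1 + y) ** (t + 1)                # step 6: liability NPV
                   for t, cf in enumerate(cash_flows))
        path.append(assets / liab)                        # step 7: funding ratio
        yield_prev = y
    return path

def funding_ratio_bands(n_trials=5000, years=5):
    """Steps 9-10: many trials, then the 5th/95th percentile of the ratio per year."""
    paths = [one_trial(years=years, seed=i) for i in range(n_trials)]
    bands = []
    for yr in range(years):
        vals = sorted(p[yr] for p in paths)
        bands.append((vals[int(0.05 * n_trials)], vals[int(0.95 * n_trials)]))
    return bands
```

Step 11 would just be re-running `funding_ratio_bands` over a grid of `w_eq` and `dur_fi` values.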

So this step 2 (generate a uniform random number between 0 and 1 and map it to rate moves: rates fall .5% if it lands between 0 and .1, fall .25% between .1 and .3, stay unchanged between .3 and .5, etc…) would be generated from some empirically determined probability density function? I think it sounds like a good plan… will you be considering correlations? You might want to model it in @Risk first so you can get an idea of how your output might look and what’s doable, etc.

#1 - If you are thinking of the equity as a random price process that follows geometric Brownian motion (i.e. the same standard assumption as B-S), then the mean of your normal random variable should be the expected equity return minus half the variance.

#2, #3 - Possibility of negative risk-free rates and negative spreads, no? You can still keep it simple by working with the log of interest rates to remove the possibility of negative rates, and the next small step is to consider a mean-reverting process; otherwise you may see the other extreme - unreasonably huge interest rates.

Correlations between 1, 2, and 3 will definitely play a big role, but if you can estimate the correlation coefficient it’s not so hard to generate correlated normal r.v.s.
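On that last point, a minimal sketch of generating a correlated pair of standard normals (the two-variable case of a Cholesky factorization); `rho` is whatever correlation coefficient you estimate:

```python
import math
import random

def correlated_pair(rng, rho):
    """Return two standard normals with correlation rho (2x2 Cholesky)."""
    z1 = rng.gauss(0.0, 1.0)
    e2 = rng.gauss(0.0, 1.0)   # independent noise
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * e2
    return z1, z2
```

Each of the pair can then be scaled by its own series’ mean and standard deviation.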

DoubleDip Wrote:
-------------------------------------------------------
> would be generated from some empirically
> determined probability density function? I think
> it sounds like a good plan… will you be
> considering correlations? you might want to first
> model in @Risk so you can get an idea of how your
> output might look and what’s doable etc.

This is what I am not sure of yet. I was actually considering making the interest rate moves somewhat dependent on the equity returns (making a really big and dirty assumption that the equity return represents an unbiased indicator of the overall health of the economy): if equity returns fall more than 2 standard deviations below the mean, interest rates are lowered by 25 to 50 bp, and if they rise more than 2 standard deviations above the mean, interest rates rise by 25 to 50 bp. After playing with the numbers and finding no correlation whatsoever between equity movements and interest rate changes, I abandoned that idea. If I can flesh out a relationship somewhere I will use it; otherwise I will make interest rate movements completely independent of anything else. I do need to find the PDF of interest rate changes, like you said.

Mobius Striptease Wrote:
-------------------------------------------------------
> #1 - if you are thinking of the equity as a random
> price process that follows geometric brownian
> motion (i.e. same standard assumption as B-S),
> then the mean of your normal random variable
> should be the expected equity return minus half
> the variance
>
> #2, #3 - possibility of negative risk-free rates,
> negative spreads, no? you can still keep it simple
> by working with the log of interest rates to
> remove the possibility of negative rates, and the
> next small step is to consider a mean-reverting
> process otherwise you may see the other extreme -
> unreasonably huge interest rates.
>
> definitely correlations between 1, 2, 3 will play
> a big role, but if you can estimate the
> correlation coefficient it’s not so hard to
> generate correlated normal r.v.

Regarding the possibility of negative risk-free rates: yes, I was going to implement a floor on the risk-free rate. Since my company is too cheap to buy a decent statistical package, I am putting this all together in Excel, and then I will probably automate it with VBA so that anyone in my company can use it. I was planning to just use the MAX function in the cell to enforce a minimum value of .25% for the risk-free rate. I shouldn’t have a problem with unreasonably huge interest rates because I am only modeling out about 5 years. I will definitely be looking at the correlations between #1 and #3, basically assuming that the equity return represents the most extreme form of credit risk. Not sure about #2 because I can’t find the relationship (if you know of one, please let me know). I am using the polar form of the Box-Muller transform to generate my normally distributed random numbers. It generates a pair of random deviates that I can introduce correlations between. Not sure about your first comment.
Yes, it is Brownian motion, but (as I consult Wikipedia to refresh my memory on geometric Brownian motion) I am looking at absolute returns rather than just the incremental change in returns (did I get that right?). To put it simply, I have a sheet full of uniformly distributed random numbers that I plug into my Box-Muller function (at least I will… currently I’m just using Excel’s built-in NORMINV function), with a mean and stdev that I plug in from our company capital market assumptions.
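For reference, the polar (Marsaglia) form of Box-Muller mentioned above looks roughly like this in Python; the scaling helper is just the NORMINV-style step of applying your own mean and stdev assumptions:

```python
import math
import random

def box_muller_polar(rng):
    """Return a pair of independent standard normal deviates (polar Box-Muller)."""
    while True:
        u = 2.0 * rng.random() - 1.0
        v = 2.0 * rng.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:        # reject points outside the unit circle
            break
    factor = math.sqrt(-2.0 * math.log(s) / s)
    return u * factor, v * factor

def scaled_pair(rng, mu, sigma):
    """Apply capital-market-assumption mean and stdev, like NORMINV(RAND(), mu, sigma)."""
    z1, z2 = box_muller_polar(rng)
    return mu + sigma * z1, mu + sigma * z2
```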

If your stock price X follows GBM with expected annual return mu and annualized volatility v, then log X is normal with mean (mu - v^2/2)*T and standard deviation v*sqrt(T). The key point is that if your expected continuously compounded return for the stock is mu, then when you model the distribution of the log-returns for the stock, it will be normal with mean equal to mu minus half the variance, not just mu. Look up GBM and you’ll see that 1/2 term everywhere; it is probably somewhat counter-intuitive.

For the risk-free rates: if you’ll be looking at a historical time series of interest rates to extrapolate the parameters of your distribution, then you’ll break that relationship going forward by throwing in a floor function. What I was suggesting is, look at a historical time series of the log of interest rates instead and extrapolate the distribution parameters from there, then model the evolution of the log of interest rates going forward. To get the actual interest rate you just take exp() of your modeled series, which will always be non-negative, of course.
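A quick sketch of the log-rate idea: simulate the log of the rate as a random walk and exponentiate, so the modeled rate can never go negative. The starting rate and per-step volatility here are placeholders, not fitted values:

```python
import math
import random

def simulate_rate(r0=0.03, mu=0.0, sigma=0.15, steps=60, seed=0):
    """Random walk on log(rate); exp() keeps the modeled rate strictly positive."""
    rng = random.Random(seed)
    log_r = math.log(r0)
    path = []
    for _ in range(steps):
        log_r += rng.gauss(mu, sigma)   # shock applied in log space
        path.append(math.exp(log_r))    # back to a rate, always > 0
    return path
```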

Mobius Striptease Wrote:
-------------------------------------------------------
> if your stock price X follows GBM, and has
> expected annual return mu, and annualized
> volatility v, then logX is normal with mean
> (mu-v^2/2)*T, and standard deviation v*sqrt(T)
>
> the key point is that if your expected
> continuously compounded return for the stock is
> mu, when you model the distribution of the
> log-returns for the stock, it will be normal with
> mean equal to mu minus half the variance, not just
> mu. look up GBM and you’ll see that 1/2 term
> everywhere, it is probably somewhat
> counter-intuitive
>
> for the risk-free rates, if you’ll be looking at a
> historical time series of interest rates to
> extrapolate the parameters of your distribution,
> then you’ll break that relationship forward by
> throwing in a floor function. what i was
> suggesting is, look at a historical time series of
> the log of interest rates instead and extrapolate
> the distribution parameters from there, then model
> the evolution of the log of interest rates going
> forward. to get the actual interest rate you’ll
> just take exp() of your modeled series which will
> always be non-negative of course

I guess I am trying not to go too crazy with the math here (mostly because it confuses me). I am setting up the asset side in Excel to look at the value of assets at the end of each month. Over that month the assets are subject to investment experience (total return as a function of equity return and fixed income return) and plan-level experience (contributions made by the plan sponsor and distributions paid out as benefits to the participants). So my assets are compounded monthly. At the end of 12 months we assume that the actuary values the liabilities in the annual valuation report as the NPV of the projected liability cash flows, discounted at a rate that depends on the interest rate environment (corporate yield curve) and the timing of the cash flows (duration).
Based on this method of valuing the assets and liabilities, I’m not sure why I even need to get into GBM. The compounding takes care of itself. Am I missing something important here?
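The bookkeeping described above might look like this in Python; the contribution, benefit, and cash-flow figures are placeholders:

```python
def roll_forward_year(assets, monthly_returns, contribution, benefits):
    """12 monthly steps: investment experience plus plan-level cash flows."""
    for r in monthly_returns:
        assets = assets * (1 + r) + contribution - benefits
    return assets

def liability_npv(cash_flows, annual_yield):
    """Annual valuation: NPV of projected liability cash flows at the corporate yield."""
    return sum(cf / (1 + annual_yield) ** (t + 1)
               for t, cf in enumerate(cash_flows))
```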

Mobius Striptease Wrote:
-------------------------------------------------------
> for the risk-free rates, if you’ll be looking at a
> historical time series of interest rates to
> extrapolate the parameters of your distribution,
> then you’ll break that relationship forward by
> throwing in a floor function. what i was
> suggesting is, look at a historical time series of
> the log of interest rates instead and extrapolate
> the distribution parameters from there, then model
> the evolution of the log of interest rates going
> forward. to get the actual interest rate you’ll
> just take exp() of your modeled series which will
> always be non-negative of course

I see what you are saying here. Shouldn’t be hard to do.

All I am saying is: if I were modeling the distribution (1 year from now) of the return for some stock whose drift is assumed to be, say, 15% (which I might have estimated via CAPM) and whose annual volatility is 40% (which I might have estimated as the standard deviation of historical returns, or using implied vol, whatever), I would use a normal distribution with mean 7% and standard deviation 40%. I wouldn’t use a normal with mean 15%, st dev 40%. The 7% comes from 15% - 40%^2/2. If my stock price is $100 today, the expected stock price 1 year from now is $100*exp(15%) = $116, as one would expect. The median stock price 1 year from now is $100*exp(7%) = $107. The continuously compounded returns for the stock will be symmetrically distributed around the median of 7%. Obviously, using 7% vs 15% can make a big difference, especially compounded over a period longer than 1 year.
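This is easy to check numerically: simulating 1-year terminal prices with log-returns drawn from N(15% - 40%^2/2, 40%) gives a sample mean near $116 and a sample median near $107, matching the figures above:

```python
import math
import random

def terminal_prices(s0=100.0, mu=0.15, vol=0.40, n=200_000, seed=1):
    """Simulate 1-year terminal GBM prices: log-return ~ N(mu - vol^2/2, vol)."""
    rng = random.Random(seed)
    drift = mu - 0.5 * vol * vol   # 0.15 - 0.08 = 0.07
    return [s0 * math.exp(rng.gauss(drift, vol)) for _ in range(n)]
```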

I’m a bit confused. What is the time step that you are using? When you say you’ll repeat the process 5,000 to 10,000 times, is this the number of time steps or the number of trials?

Anyway, assuming that your algorithm is correct, you need to incorporate a few things. Otherwise, the output will be completely wrong.

1) Forward/swap curve for corporate spreads. You seem to be assuming constant growth for spreads. Your simulation needs to take market expectations into account. Otherwise, risk will be mispriced.

2) You need to use a volatility surface (local vol?) for equity vol drift. Furthermore, the price drift should be the risk-free rate minus implied dividends. You can’t use expected equity return in a risk-neutral MC model.

3) Why is the RF rate discrete rather than continuous? Where are you getting your assumed jump frequencies?

Yeah, I think there’s a lot of work to do.

ohai Wrote:
-------------------------------------------------------
> I’m a bit confused. What is the time step that you
> are using? When you say you’ll repeat the process
> 5000 to 10,000 times, is this the time steps or
> the number of trials?
>
> 1) Forward/swap curve for corporate spreads. You
> seem to be assuming constant growth for spreads.
> Your simulation needs to take market expectations
> into account. Otherwise, risk will be mispriced.
>
> 2) You need to use a volatility surface (local
> vol?) for equity vol drift. Furthermore, the price
> drift should be the risk free rate minus implied
> dividends. You can’t use expected equity return in
> a risk neutral MC model.
>
> 3) Why is the RF rate discrete not continuous?
> Where are you getting your assumed jump
> frequencies?
>
> Yeah, I think there’s a lot of work to do.

5,000 to 10,000 is the number of trials. I’m working on getting my mind around equity vol drift; Mobius has opened up that can of worms (googling the heck out of it as we speak). RF rate changes are discrete just because that seems to be the behavior of the Fed (i.e. 25 or 50 basis point changes). The jump frequency will be random because there will be time periods when the risk-free rate stays the same. Yes, there is lots of work to do!

I think you’re thinking of the neutral federal funds rate. This is the rate that the Fed targets, currently 0-0.25%. The true “risk-free” discount rate, i.e. the rate that you should use to discount future payments, is continuous. It’s best represented by LIBOR, or by swap rates if the maturity is greater than 12 months.

As far as I understand, he’s calculating distributions of funding ratios in the future; he’s not pricing derivatives, and there is no discounting involved. He doesn’t need a risk-neutral MC; the drift should be the expected return. Local volatility is overkill for this; he’ll do fine with constant volatility a la B-S. There is a need for some clean-up, but let’s not push it.

My explanation sort of jumped over some steps. Yes, the 25 to 50 basis point moves describe the neutral fed funds rate. From that I derive the risk-free rate, and then I add a spread on top of that to get my yield. That yield will affect total return based on the duration of the fixed income asset and the duration of the pension liability. So yes, there is discounting involved as it pertains to the liability, because the liability is defined as the PV of all future liability cash flows discounted at the hypothetical yield on investment-grade corporate bonds with a similar duration.
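That yield build-up and the first-order duration arithmetic can be sketched like this (all figures are placeholders):

```python
def corporate_yield(rf_rate, spread):
    """Risk-free rate (derived from the policy rate) plus a credit spread."""
    return rf_rate + spread

def duration_price_impact(duration, yield_change):
    """First-order (modified-duration) price change for a given yield move."""
    return -duration * yield_change

def fixed_income_total_return(start_yield, yield_change, duration):
    """Income component plus the duration-driven price component."""
    return start_yield + duration_price_impact(duration, yield_change)
```

The same `duration_price_impact` idea applies on the liability side, just with the (typically longer) liability duration.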

Oh, and what are you referring to when you say B-S?

It would be really awesome if we could somehow post an Excel workbook on here so that people could see what we are trying to do.

By B-S I meant Black-Scholes: you are assuming your annualized volatility of equity returns is constant, right? By no discounting I meant: you are using the distribution of equity returns to get to the distribution of values of fund assets at the end of a period, which you use to find the distribution of funding ratios? Ultimately, if you are interested in future distributions (of fund assets, funding ratios), then you wouldn’t do a risk-neutral simulation of equity returns, i.e. your drift would be the expected return rather than the risk-free rate.

Mobius Striptease Wrote:
-------------------------------------------------------
> by B-S i meant Black-Scholes - you are assuming
> your annualized volatility of equity returns is
> constant, right.
>
> by no discounting I meant, you are using the
> distribution of equity returns to get to the
> distribution of values of fund assets at the end
> of a period, which you use to find the
> distribution of funding ratios? ultimately if you
> are interested in future distributions (of fund
> assets, funding ratios), then you wouldn’t do a
> risk-neutral simulation of equity returns i.e.
> your drift would be the expected return rather
> than the risk-free rate

Ahhh, crud… now I gotta brush up on Black-Scholes (haven’t had to look at that in at least a few years). Yes, I am assuming constant annualized volatility in my equity returns.

jg, you have already gotten a lot of great suggestions, but I still don’t think you will get realistic results. You are working on a difficult project. I understand that you have to make simplifying assumptions, but I’m afraid you are over-simplifying the problem. If I were you, I’d play with historical data. For example, if you analyze autocorrelations you will see mean reversion in credit spreads, just as Mobius pointed out (an Ornstein-Uhlenbeck process would be appropriate). You can also find conditional heteroscedasticity in equities, and even if you adjust for that, standardized returns will be non-normal. I don’t want to discourage you, but I think your model will have to be a little more complex in order to get somewhat reasonable results. Good luck!
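For the mean-reverting spread idea, a discretized Ornstein-Uhlenbeck sketch; theta, mu, and sigma here are placeholder values, not fitted estimates:

```python
import random

def ou_path(x0=0.03, mu=0.015, theta=2.0, sigma=0.004,
            steps=120, dt=1.0 / 12.0, seed=0):
    """Euler-discretized Ornstein-Uhlenbeck: pulls x back toward mu at speed theta."""
    rng = random.Random(seed)
    x = x0
    path = []
    for _ in range(steps):
        x += theta * (mu - x) * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

Starting the spread above its long-run mean, the path drifts back toward `mu` instead of wandering off the way a plain random walk would.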

maratikus Wrote:
-------------------------------------------------------
> If I were you, I’d
> play with historical data.

I second that recommendation. Google “historical simulation” – most hits will involve VaR, where this technique sees a lot of use. If you have Hull’s book, he covers the basics. For better or worse, recent history includes a lot of movements that, prior to 2007, might have been considered extreme. So without going too far back in history, you should easily be able to generate a broad (conservative) distribution of outcomes.
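A minimal historical-simulation sketch: resampling observed (equity return, rate change, spread change) tuples jointly preserves their empirical correlations without any distributional assumption. The two-row `history` below is a toy placeholder, not real data:

```python
import random

def bootstrap_paths(history, years, n_trials, seed=0):
    """history: list of (equity_ret, rate_chg, spread_chg) tuples observed together.
    Each trial draws whole tuples at random, keeping cross-variable co-movement."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_trials):
        paths.append([rng.choice(history) for _ in range(years)])
    return paths
```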