# Exponential Decay in Financial Modeling

Hey BSDs!

Anyone familiar with exponential decay and why it is used in financial modeling? I’d love to know why it’s used, or where I can read up on it.

Thanks

What’s there to know? You weight more recent observations more heavily than older ones for the purposes of calculating means and covariances.
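To make that concrete, here’s a minimal sketch of exponentially decaying weights applied to a return series. The decay factor of 0.94 and the sample returns are just illustrative choices, not anything from this thread:

```python
import numpy as np

def ewma_weights(n, lam=0.94):
    """Exponentially decaying weights; the most recent observation is last
    and gets the largest weight. lam closer to 1 = slower decay."""
    w = lam ** np.arange(n - 1, -1, -1)  # oldest gets lam^(n-1), newest gets 1
    return w / w.sum()                   # normalize so weights sum to 1

returns = np.array([0.01, -0.02, 0.015, 0.03])
w = ewma_weights(len(returns))
ewma_mean = np.sum(w * returns)  # recent observations dominate the mean
```

The same weights can be reused for a weighted covariance, which is the usual application in risk models.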

For example, say you are modeling the behavior of balances over time in demand deposits. Most models and the academic research use the calculated SMM and apply it to the balance like you would for prepayments. So a 90% retention rate would result in the following predicted cash flows:

100/90/81/etc…

But instead, you could just hold the amount of run off constant:

100/90/80/70

The WAL and calculated “value” of these two methods are drastically different. It seems on the surface there *has* to be a reason why everyone does it the first way and not the second way. But it’s not as easy to find as I expected it to be.
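A quick sketch of the two schedules you describe, with the WAL computed from the runoff cash flows, shows just how different they come out (the 120-month truncation horizon is my own illustrative choice):

```python
import numpy as np

def wal(balances):
    """Weighted-average life: average time (in periods) of each dollar of runoff."""
    cf = balances[:-1] - balances[1:]   # runoff received in each period
    t = np.arange(1, len(balances))     # period in which each cash flow arrives
    return (t * cf).sum() / cf.sum()

ret = 0.90
n = 120  # months, truncation horizon

exp_bal = 100 * ret ** np.arange(n + 1)                 # 100, 90, 81, ...
lin_bal = np.maximum(100 - 10 * np.arange(n + 1), 0.0)  # 100, 90, 80, ..., 0

# The linear schedule pays everything back by month 10 (WAL of 5.5 months);
# the exponential schedule never quite reaches zero, and its WAL is near
# 1/SMM = 10 months over a long horizon.
```

Roughly double the WAL under the exponential method, which is why the two valuations diverge so much.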

I would line up my time series and write a routine that iterates over different lambda values until I found the lambda that best describes the subsequent values, then check whether it holds up over various time periods. Not sure exactly what your specific industry field is, but generally, this kind of empirical calibration seems to be the best way to go when fitting decay curves.
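That calibration loop can be sketched as a simple least-squares grid search. The grid bounds, the synthetic data, and the "true" retention of 0.95 are all made up for illustration:

```python
import numpy as np

def fit_lambda(balances, grid=np.linspace(0.80, 0.999, 200)):
    """Grid-search the per-period retention (decay) factor that best fits
    an observed balance series, minimizing squared error."""
    t = np.arange(len(balances))
    b0 = balances[0]
    errs = [np.sum((balances - b0 * lam ** t) ** 2) for lam in grid]
    return grid[int(np.argmin(errs))]

# synthetic example: true retention 0.95 plus observation noise
rng = np.random.default_rng(0)
obs = 100 * 0.95 ** np.arange(36) + rng.normal(0, 0.5, 36)
lam_hat = fit_lambda(obs)
```

As you suggest, you would re-run this over different sample windows to see whether the fitted lambda is stable.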

Clearly, the exponential decay method works best if your time series has some trending, otherwise weighting the more recent observations more heavily adds no value. Before setting up an exponential decay model, you may want to test your data first to see if autocorrelation or other measures of persistence exist. If your data is uncorrelated across time (or even worse, anti-persistent), then exponential decay methods will not likely add any value, and you should stick with a 1/N approach.
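A quick way to run that persistence check is a lag-1 sample autocorrelation; the two toy series below (a pure trend and white noise) are just for illustration:

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

# trending series -> strong positive autocorrelation
trend = np.arange(50, dtype=float)

# white noise -> autocorrelation near zero; 1/N weighting is fine here
rng = np.random.default_rng(1)
noise = rng.normal(size=500)
```

If the statistic is near zero (or negative), that is the case where exponential weighting adds nothing over equal weighting.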

The specific question comes from economic value of equity analysis used in banking. It’s specifically related to how to assign value to deposits that have no contractual maturity (demand deposits). Generally, you would conduct a deposit decay study based on a sample of closed accounts at the institution, controlling for interest rate fluctuations (the largest variable influencing account closing besides general behaviors). Then use that WAL to calculate the CPR and decay the balances exponentially. This is how I’ve always assumed it was done, and I’m curious how to explain why it’s wrong if someone tried to deviate from this method, since the results are quite different.
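For the WAL-to-CPR step, one common approximation (assuming a constant monthly decay rate over an indefinite horizon, so WAL in months ≈ 1/SMM) looks like this; the 60-month WAL is a hypothetical number, not from your study:

```python
def smm_from_wal(wal_months):
    """Under a constant single monthly mortality (SMM), the weighted-average
    life of the runoff is approximately 1/SMM months, so invert."""
    return 1.0 / wal_months

def cpr_from_smm(smm):
    """Annualize the monthly rate: CPR = 1 - (1 - SMM)^12."""
    return 1.0 - (1.0 - smm) ** 12

# e.g. a decay study implying a 5-year (60-month) WAL (hypothetical number)
smm = smm_from_wal(60)   # roughly 1.67% per month
cpr = cpr_from_smm(smm)  # roughly 18.3% per year
```

The balances are then decayed geometrically by the SMM each month, which is the exponential method from above.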

Yeah, sorry man, that is a very specific application and while I could help you on the math, I just don’t know the business context as to what approach would work best in that situation.

You might want to hit up the Wilmott forums…they tend to have specific threads like this and lots of helpful forum members who actually work in these niche corners of the industry. Good luck.

Like Destroyer of Worlds, my post really didn’t cover what you’re asking. I don’t know much about modeling MBS and I’m not up on the jargon. I can see why they call it exponential decay (from the equations on the Wikipedia page, a 90% retention rate gives a lambda slightly bigger than 0.1, since −ln(0.9) ≈ 0.105), but it’s also like declining-balance depreciation.

Another benefit of this approach: it never falls to zero. The other method will fall to zero eventually. In general, I would just expect that the industry settled on this approach for a sensible reason and not worry too much about it. It could be due to its relation to accelerated depreciation, not sure.

Why? There may not be a deep theoretical reason but it could be based on an empirical fit. If you plot historical balances over time, do you get a good linear fit? Does it improve if you plot log of balances vs. time instead? If yes, that’s good enough reason.
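That empirical test is easy to run: fit a line to raw balances and to log balances and compare the fits. The series below is synthetic (a clean 95% monthly retention), just to show the mechanics:

```python
import numpy as np

def r_squared(y, yhat):
    """Coefficient of determination for a fitted series."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def linear_fit_r2(t, y):
    """R^2 of an ordinary least-squares straight-line fit of y on t."""
    slope, intercept = np.polyfit(t, y, 1)
    return r_squared(y, slope * t + intercept)

t = np.arange(60, dtype=float)
bal = 100 * 0.95 ** t  # exponentially decaying balances

r2_raw = linear_fit_r2(t, bal)          # linear fit to raw balances
r2_log = linear_fit_r2(t, np.log(bal))  # linear fit to log balances
```

If the log fit is the clear winner on your actual balance history, that is the empirical case for the exponential method.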

I’m suddenly feeling very stupid. I hate myself for clicking on this thread.

So far, I’ve found reasons to believe this is true.

I did post on Wilmott as well. Hopefully they’ll be as helpful as the rest of you.

Thanks!