Excel help

Hi guys,

I have a distribution of rogue trader incident losses over the past decade across all banks globally. The list is 130 entries long and covers losses from US$5m up to US$7bn. Clearly the distribution is weighted towards the lower end, but there are a couple of outliers.

I want to create a bell curve and ultimately a 99% 10-day VaR from this list. How do I go about doing it?

thanks!

You can use the Histogram tool to bucket the data points by loss magnitude, and use built-in Excel functions to calculate the mean, standard deviation, kurtosis, skew, or other meaningful metrics. It’s not going to look like a bell, though, given what you said about the skewness of the data.
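For example, if the 130 losses were sitting in A2:A131 (a hypothetical range), the basic stats would be along the lines of:

Mean: =AVERAGE(A2:A131)
Standard deviation: =STDEV.S(A2:A131)
Skewness: =SKEW(A2:A131)
Excess kurtosis: =KURT(A2:A131)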

I don’t exactly know what you mean by VaR for this data set. VaR is the loss at some probability percentile. It makes sense to use this metric for a continuous series of data, but not necessarily across data from different companies and different time periods.

For instance, let’s say you have rogue trader losses from BAC (large company) and Midland Savings Bank (small company). The data from one company cannot be used to imply risk for the other, since the companies are different in size and business function.

Assuming instead that you only care about the magnitude of loss at some defined percentile, you can use the NORM.INV() function, which assumes a normal distribution but can be nested in other formulas if you want to assume different distributions.
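For instance, with the mean in B1 and the standard deviation in B2 (hypothetical cells), the 99th-percentile loss under a normal assumption would be something like:

=NORM.INV(0.99, B1, B2)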

Are you trying to build a histogram or a bell curve using the group characteristics?

Calculate the distribution parameters for the 130 data points, including skewness and kurtosis, in Excel. Then apply the Cornish-Fisher VaR expansion: https://faculty.washington.edu/ezivot/econ589/ssrn-id1997178.pdf.
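As a rough sketch (check the paper for the exact form and conventions): with z = NORM.S.INV(0.99) in B3, skewness in B4 and excess kurtosis in B5 (hypothetical cells), the modified z-score is

=B3 + (B3^2-1)*B4/6 + (B3^3-3*B3)*B5/24 - (2*B3^3-5*B3)*B4^2/36

and the Cornish-Fisher VaR is then the mean plus the modified z times the standard deviation.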

Thanks for this.

The loss data are from the equities and global markets divisions across banks globally. My task is to calculate an operational risk 10-day 99% VaR across these divisions. I am pursuing a couple of different methods, including a scenario analysis and a bottom-up estimate from the individual desks.

The point about companies of different sizes is an interesting one. Given OpRisk is the failure of internal systems and controls, why wouldn’t a small company with failures be the same as a large company with failures? I don’t think I should be scaling down the losses to the size of my company…

oh man… that gives me a headache!! I should just get some quant on the case for this…

I don’t understand your methodology. Maybe it would help to clarify this. Let’s say you only have drawdown data for two big trading desks: UBS and Soc Gen. You work for a small company with 5% the trading volume. What is your method for calculating your own operational risk, based on the data from UBS and Soc Gen?

My instinct tells me to scale the UBS and SocGen losses somehow to the size of my bank and then calculate a 99% OpVaR based upon the distribution of losses and their mean.

For example, say UBS and SocGen have 100 operational risk losses from rogue trader incidents over the past 10 years. The majority of these losses are under $1bn, with a couple of outliers >$5bn. I scale these losses based upon something (staff numbers, trading volume, trading desk revenues - it has to be something I have information on), then build a distribution curve and calculate the 99% 10-day OpVaR. I would then use this OpVaR along with a bottom-up scenario to make an estimate of our own rogue trader OpVaR.
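In Excel terms I’m imagining something like this (hypothetical layout: raw loss in A2, the source bank’s scaling metric in B2, my bank’s equivalent metric in $D$1):

=A2*$D$1/B2

i.e. each loss is multiplied by the ratio of my bank’s metric to the source bank’s metric, and the distribution and VaR are then built on the scaled column.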

It’s quite a difficult task, as I only have a handful of data points for my own bank, and I think using other banks’ rogue trader data is a good starting point. I’m not even sure I should be scaling these losses, since if there is a failure of internal processes and procedures (the definition of OpRisk) then an unauthorized position of any size can be run up…

What do you think?

Your help is greatly appreciated.

Seems like you are trying to determine a base rate for rogue trader losses. I’m not so sure what you are calculating is going to be meaningful or representative of reality in any useful way. Maybe there is a way to get an outside view of this problem, but it seems like a very inside view type of analysis.

I agree with Rawraw’s opinion.

As such, there is real uncertainty in VaR because we don’t know the losses in that tail 1%. Also, by removing the outliers from the SG and UBS data points, you might be understating the VaR of your organization even more.

Given what you have, I would just go with the normal process - calculate a representative VaR for SocGen (SG) and UBS, and scale it to your bank based on # of trading desks, # of traders, # of employees, trading revenues, etc. Maybe you can use a weighted average of the parameters on which you are going to compare the organizations.
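As a sketch (hypothetical cells): if the weights sit in B2:B5 and the your-bank-to-SG/UBS ratios for each parameter (desks, traders, employees, revenues) sit in C2:C5, the composite scaling factor could be something like

=SUMPRODUCT(B2:B5, C2:C5)

with the weights summing to 1.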

Good luck!

I was thinking the same thing. How is this analysis going to be meaningful at all? Are you just performing a task given by a superior or something…

Basically yeah. I’m not an op risk guy… I’ve just been tagged onto a project to calc the rogue trader capital that has to be held by our bank.

But if I have a data set of 130 losses across a variety of banks, then why couldn’t I use that to scale an estimated loss for my bank using some sort of VaR?

I agree that there are a lot of variables, which is why OpRisk is such an annoying subject.

Scale is an important concept here because while your inner statistician tells you that you can simply aggregate up the losses to some kind of systemic VaR calculation, in fact, the qualitative aspects would be paramount. For instance, a loss of $1 billion USD for Global Bank A might be peanuts, but to a much smaller Bank B, the loss could be crippling and threaten the bank’s very existence.

There are also other factors to consider. Are the banks’ regulatory capital requirements/holdings the same? Are the holding periods/trading rules set forth by the banks the same? And so on.

If you can, I would at least scale the losses as a percentage of each bank’s balance sheet, which is info you can obtain from a BBerg terminal. Then calculate a VaR based on the percentage loss relative to capital. While none of this is perfect, it is better than a naive calculation of a VaR for those 130 losses as if they were generated by a single entity.
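As a sketch (hypothetical layout: loss in A2, that bank’s balance sheet in B2, the resulting percentages in C2:C131, your bank’s balance sheet in $E$1):

Loss as % of balance sheet: =A2/B2
99th-percentile % loss: =PERCENTILE.INC(C2:C131, 0.99)
Your VaR in dollars: the percentile above multiplied by $E$1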

Thanks for your help chaps.

I took the 130 data points of OpRisk losses across the range of entities and filtered for banks, global markets / equities-facing businesses, and losses >$1m. This left me with 80 data points across 23 banks.

I scaled each loss down (or up) by the size of that bank’s derivatives book vs my bank’s derivatives book.

I took the 99th percentile of this using the mean, SD, n and Z-score, assuming a normal distribution.

I also used an empirical metric and took the 99th-percentile data point, i.e. the 79th of the 80 losses ranked from smallest to largest.
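For reference, with the 80 scaled losses in A2:A81 (hypothetical range), the empirical figure came from something like:

=PERCENTILE.INC(A2:A81, 0.99)

which interpolates between the 79th and 80th ranked values; =SMALL(A2:A81, 79) picks out the 79th ranked value directly.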

The numbers look reasonable and are in line with what we used a few years ago.