Scheme to agree on common risk drivers could help Fed benchmark risk exposures, says JP op risk expert
A project led by a group of banks to standardise the way in which they approach scenario analysis could make it easier for US regulators to compare the level of operational risk exposure each firm faces, and ultimately the level of capital each should hold, senior industry figures have suggested.
The venture, spearheaded by the American Bankers Association (ABA), seeks to identify the key drivers of material operational risks facing firms across seven scenarios, including a rogue trading incident, a cyber attack on a critical application and the mis-selling of products to retail investors. It includes a number of banks subject to the US Federal Reserve’s Comprehensive Capital Analysis and Review (CCAR) programme, JP Morgan among them.
Scenario generation techniques allow banks seeking a forward-looking gauge of the risks they face to overcome a basic problem: a lack of readily available data. Instead, they use a family of quantitative approaches to determine their risk exposure for a particular factor – the likely degree of loss from a cyber attack, for instance – by working out what could go wrong, and how bad the consequences could be. The outputs of the analysis can then be used to inform a capital calculation, allowing the firm to put a dollar value on its risk exposure.
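The quantification step described above is often implemented as a frequency/severity simulation: estimate how often an event might occur and how costly each occurrence could be, then simulate many years to build a loss distribution whose tail informs a capital figure. The sketch below is a minimal illustration of that general technique, not any bank’s actual model – the Poisson frequency, lognormal severity and all parameter values are invented for illustration.

```python
import math
import random

random.seed(42)

# Hypothetical scenario parameters (illustrative only, not real figures):
FREQ_LAMBDA = 0.5               # average number of loss events per year
SEV_MU, SEV_SIGMA = 16.0, 1.2   # lognormal severity parameters (loss in dollars)

def poisson(lam: float) -> int:
    """Draw an event count from a Poisson distribution (Knuth's method)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def simulate_annual_loss() -> float:
    """One simulated year: draw an event count, then a severity per event."""
    n_events = poisson(FREQ_LAMBDA)
    return sum(random.lognormvariate(SEV_MU, SEV_SIGMA) for _ in range(n_events))

# Build the loss distribution and read off a high quantile as a capital proxy.
years = sorted(simulate_annual_loss() for _ in range(100_000))
capital_proxy = years[int(0.999 * len(years))]  # 99.9th percentile annual loss
```

The 99.9th percentile here stands in for the "dollar value on its risk exposure" the article mentions; in practice banks layer many scenarios, correlations and expert adjustments on top of anything this simple.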
Under CCAR, banks are required to estimate expected losses under adverse economic conditions, based on scenarios the Fed itself sets, as well as losses stemming from significant yet plausible events that are idiosyncratic to the risks they face, which they define themselves according to their business mix. But at present, each bank’s approach to scenario generation is also bespoke – making the numbers their analysis produces hard to compare fairly, even for the same risk factors, experts argue.
“My main problem with scenario analysis under CCAR is [that] the objective of the exercise, at least from a regulatory standpoint, is the comparability of the numbers,” said Evan Sekeris, a partner in the financial services, finance, risk and digital practice at Oliver Wyman, who was speaking during a panel debate at OpRisk North America on June 18.
“With the push to have different institutions generate scenarios using different means and at different levels, we end up with numbers that I doubt are comparable, because what is stressful in the eyes of one bank might not be in the eyes of another. So I think creating a standard for the scenarios, their drivers, will eventually help with the standardisation of the results. If it allows us to get a sense of comparability in the CCAR scenarios, that would be a huge win.”
Speaking on the same panel, Nedim Baruh, head of operational risk measurement and analytics at JP Morgan, said he was “a big fan” of the ABA’s project, adding that the element of comparability it could ultimately allow should be of appeal to regulators.
Ironically, scenario analysis was seen as partly culpable for the lack of comparability between banks using the advanced measurement approach (AMA) to set operational risk capital requirements – a shortcoming that in large part led to the AMA’s phase-out under Basel III. Whereas US regulators insisted banks choosing to use the method follow a strict loss distribution approach, European banks have generally been permitted to make greater use of scenario generation, the outputs of which, with a regulator’s approval, could be used to demonstrate a lower incidence of potential loss for a given risk factor, and hence lower capital requirements.
Nonetheless, argued Baruh, so far as CCAR goes, if the US Federal Reserve could get comfortable that banks were at least thinking about the way scenarios were constructed in a co-ordinated manner, the watchdog could consider adopting the industry’s approach to setting common scenarios for certain risk factors, and ultimately using them to help determine capital requirements.
“If we all standardise these models, and if we standardise the drivers, maybe the Fed can use those and project what they define to be operational stress. Right now, CCAR defines economic stress, but not operational stress – it relies on us to define our own stresses, based on our own vulnerabilities. Great, I think there’s some value in that – but it sure doesn’t help comparability. But if we all have similar models with the same drivers, [the Fed] can project those drivers in a stressful operational risk environment – that’s what the definition of stress is. That’s very helpful for us, but also for them, in the context of CCAR. [Could it be used to set] capital? Over time, probably, as well – but way down the road,” said Baruh.
The aim of the ABA’s project is to produce a distribution of possible loss impacts for each risk factor it is considering, an approach that banks could use to help inform capital planning – although its initial aim is focused on helping firms involved make better business decisions, said Jane Yao, senior vice-president of the benchmarking and survey group at the ABA, speaking during the debate.
“You have to look at the distribution, where the loss potential is – versus ‘if we put a billion dollars here, will that reduce my loss to a more acceptable level?’ That’s the business decision right there. We’re not trying to use scenarios for capital numbers. You can use this for business decisions, going into new markets. You can play the different scenarios: ‘Is this worth getting into? What would be my exposure?’ I think that’s the beauty of going to a structured approach: you really understand what is driving the losses, [what are] the risks.”
By standardising data on the drivers of risk behind exposures, used as inputs to inform scenarios, the project ultimately hopes to help banks benchmark their scenario outputs against one another, said Yao – giving a firm a rough idea, for the first time, of how material its exposure to loss from rogue trading in its equities business is, for instance, compared with its peers.
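Once every firm defines a scenario’s drivers the same way, the benchmarking Yao and Baruh describe reduces to comparing like-for-like outputs and flagging outliers. The sketch below shows one crude way that screening could work; the bank names, loss figures and 1.5-standard-deviation threshold are all invented for illustration.

```python
import statistics

# Hypothetical scenario outputs: each bank's severe-loss estimate (in $m)
# for the same standardised rogue-trading scenario. All figures invented.
peer_outputs = {
    "Bank A": 450, "Bank B": 520, "Bank C": 480,
    "Bank D": 1900, "Bank E": 510,
}

mean = statistics.mean(peer_outputs.values())
stdev = statistics.stdev(peer_outputs.values())

# Flag any bank whose estimate sits more than 1.5 sample standard deviations
# from the peer mean - a comparison only meaningful once the scenario's
# drivers are defined identically at every firm.
outliers = {bank: x for bank, x in peer_outputs.items()
            if abs(x - mean) > 1.5 * stdev}
```

An outlier flag would not by itself mean a bank’s number is wrong – as Baruh notes, scenarios are hard to backtest – but it tells a firm, and potentially a regulator, where to look first.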
It would, said Baruh, be “a big win” if the project allowed banks to benchmark risk exposure among themselves. Benchmarking against one’s peers was also a tremendous opportunity, he suggested – “because how many times in scenario [generation] have you said ‘how likely is that [outcome]?’, because it’s hard to backtest? But if you can benchmark, you get to see if you’re an outlier – that’s important”.
For that reason, regulators should also be keen, argued Baruh: “The idea of having some kind of standard model really should be appealing to the industry – not just for the banks, who would benefit from sharing information and spending less time [modelling] from a resource and cost-effectiveness point of view, but also the regulators. Because when regulators start talking about standardised models – which is the reason they went to the SMA [standardised measurement approach] – well, here’s a really important way of measuring and understanding operational risk.”