Banks warned off machine learning for model risk

By Louie Woodall | News | 17 August 2017

Banks acknowledge they “cannot hide behind a complex tool” to assess interconnectedness

US regulators are raising concerns about the use of machine learning techniques to assess contagion risks in bank model networks.

Last year, certain entities supervised by the US Federal Reserve were asked to analyse their aggregate model risk – essentially the interactions and dependencies between various risk and pricing models. Banks responded by experimenting with advanced computational techniques to understand model interconnectedness, including machine learning, network theory and probabilistic graphical models.

However, regulators are cautioning banks that these approaches lack transparency and could obscure the true extent of their vulnerabilities.

“Regulators want transparency and you cannot hide behind a complex tool,” says Nikolai Kukharkin, a senior risk manager and the former global head of model risk management and control at UBS. “There is always concern if you don’t understand something and lose the intuition behind it. If you apply machine learning to assess model risk, this algorithm is itself a model and you have to prove that it works.”

In conversations with banks, the Fed has said it does not want them to develop sophisticated approaches that reduce their aggregate model risk to a single measure, as these can be misleading.

“There is a reason the regulators don’t want banks to rely exclusively on sophisticated mathematical models,” says Shaheen Dil, managing director at consultancy Protiviti. “The purpose of understanding [aggregate] model risk is to try and understand the risks posed by the sophisticated models that banks have built. If you put another sophisticated model on top of that to estimate the aggregate model risk of each of these individual models, you are actually adding another layer of uncertainty.”

The Fed declined to comment.

Supervisory guidance on model risk management issued by the Federal Reserve in 2011, known as SR 11-7, calls for aggregate model risk to be monitored and assessed alongside the risks posed by individual models, but does not spell out how this should be achieved in practice.

“SR11-7 is a principles-based rule, and I’m not sure if the Fed has an appetite to come up with a more prescriptive rule,” says Lourenco Miranda, head of model risk management for Americas at Societe Generale. “However, they are asking through individual supervisions how the models in isolation are connected with each other.”

These bilateral conversations have acted as a spur for banks to ramp up their aggregate model risk assessments. Wells Fargo is one dealer that has improved its view of model interconnectedness following the Fed’s prompting.

“[The Fed] asked us: do we have a good understanding of the connectedness of our models? At that point, a year and a half ago, it was not as thorough as we might have wanted. Since then, we have in subsequent reviews with the regulator gone over our overall methodology,” says Misty Ritchie, a senior vice-president in Wells Fargo’s model risk management team. “If they ask us that question again we can say yes, we have a very good understanding. I do suspect that at some point they are going to come back to get more of an in-depth review.”

Insurers blind to new threats, network analysis suggests

By Alexander Campbell | News | 4 August 2017

Risk taxonomies driven by top-down approach or externally imposed labels expose firms to blind spots

Existing ways of categorising the emerging risks faced by insurers might not be fit for purpose, new research suggests, and could lead to firms underestimating or missing entire areas of the risk landscape.

In research due to be published in the Journal of Network Theory in Finance later this year, Christos Ellinas, a research fellow at Bristol University, used a bottom-up approach to analyse insurers’ assessments of the emerging risks they face.

Ellinas used a list of 143 individual emerging risks, collected from 15 UK insurers and reinsurers by risk data consortium Oric. Each risk was tagged with descriptors drawn from a list of 24, such as ‘customer service’ or ‘outsourcing’. Based on similarities in the tagging, Ellinas was able to draw up a network diagram of all 143 risks and found they fell naturally into five clusters: ‘large-scale events’, ‘regulation-related’, ‘cyber-related’, ‘economy-related’ and ‘political-related’.

“This bottom-up approach contrasts [with] typical risk classification schemes, where a top-down approach is generally adopted, building on externally imposed labels based on a particular organisational function, such as ‘strategic risk’, or a regulatory requirement,” says Ellinas, adding that the modules are more than simply an artefact of the tagging system.
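
To make the mechanics concrete, here is a minimal sketch of how such a bottom-up clustering can be run, assuming each risk is tagged with a handful of descriptors. The risks, tags, similarity threshold and community-detection routine below are illustrative assumptions, not details taken from Ellinas’s paper.

```python
# Minimal sketch of bottom-up risk clustering from descriptor tags.
# Risks, tags and the similarity threshold are hypothetical; the published
# study works with 143 risks and 24 descriptors.
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each emerging risk is tagged with descriptors (hypothetical data)
risks = {
    "Ransomware outbreak": {"cyber", "operational", "third party"},
    "Data privacy regulation": {"regulation", "cyber", "conduct"},
    "Sovereign debt crisis": {"economy", "credit", "political"},
    "Trade sanctions": {"political", "regulation", "economy"},
    "Cloud outage": {"cyber", "third party", "operational"},
}

def jaccard(a, b):
    """Similarity of two tag sets: size of overlap divided by size of union."""
    return len(a & b) / len(a | b)

# Link two risks whenever their tag sets are sufficiently similar
G = nx.Graph()
G.add_nodes_from(risks)
for (r1, t1), (r2, t2) in combinations(risks.items(), 2):
    sim = jaccard(t1, t2)
    if sim >= 0.2:  # illustrative threshold
        G.add_edge(r1, r2, weight=sim)

# Let the clusters ("modules") emerge from the network structure itself
for i, module in enumerate(greedy_modularity_communities(G, weight="weight"), 1):
    print(f"Module {i}: {sorted(module)}")
```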

Ellinas then checked how many risks in each module had been identified by each of the 15 insurers involved. If the modules bore no relation to how emerging risks were actually identified or missed, then the distribution of these risks across modules would be more or less random, just as it would for a truly exogenous classification such as alphabetical order. But, in fact, ‘risk blindness’ tended to be concentrated in a few of the five modules.

“No firm is able to uniformly identify risk across all five modules; there is a consistent bias across firms to specialise in identifying risks of a particular nature. Taken to the extreme, this bias essentially corresponds to a firm’s ‘horizon scanning’ being limited to particular modules,” Ellinas wrote.

Some firms failed to identify any emerging risks from some of the five modules, being essentially blind to that entire area, he argues.

Previous studies have looked at the networks of causal connections between operational risks: Risk.net’s own top 10 operational risks list can be divided into an operational and a regulatory grouping, for example; and the World Economic Forum’s (WEF) 2017 Global Risks Report included a network diagram that grouped the risks identified into ‘economic’, ‘environmental’, ‘geopolitical’, ‘societal’ and ‘technological’ clusters.

Overall, a general mismatch exists between the independent and systemic impact across the most influential risks, which indicates the assumption of risk independence obscures the emerging nature of risks

Christos Ellinas, Bristol University

However, Ellinas argues the latter analysis was limited, as respondents to the WEF survey were only permitted to suggest three to six links between pairs of risks, which biases the shape of the resulting network.

Ellinas also used his network diagram to predict the systemic impact of each emerging risk: the stronger the connection between two risks, the more likely one was considered to trigger the other; and a risk with close ties to many other risks was considered to have a high systemic impact.

As the 15 insurers and reinsurers were also required to class the individual impact of each item on the list as ‘high’, ‘medium’ or ‘low’, Ellinas was able to compare individual and systemic impacts. He warns: “Overall, a general mismatch exists between the independent and systemic impact across the most influential risks, which indicates the assumption of risk independence obscures the emerging nature of risks.”
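
As a rough illustration of that comparison, the sketch below contrasts a firm-assigned stand-alone impact rating with a simple network-derived proxy for systemic impact (weighted degree, ie, how strongly a risk is tied to the others). Both the data and the proxy are assumptions made for illustration; the paper’s own measure may differ.

```python
# Sketch: compare a stand-alone impact rating with a network-based proxy for
# systemic impact. Data and the weighted-degree proxy are illustrative only.
import networkx as nx

independent_impact = {  # ratings of the kind firms supplied
    "Ransomware outbreak": "medium",
    "Data privacy regulation": "low",
    "Sovereign debt crisis": "high",
    "Cloud outage": "medium",
}

G = nx.Graph()
G.add_weighted_edges_from([
    ("Ransomware outbreak", "Cloud outage", 0.8),
    ("Ransomware outbreak", "Data privacy regulation", 0.5),
    ("Data privacy regulation", "Sovereign debt crisis", 0.3),
])

# Systemic-impact proxy: how strongly a risk is tied to the other risks,
# ie, how likely it is to trigger similar ones if it materialises
systemic = dict(G.degree(weight="weight"))

for risk, rating in independent_impact.items():
    print(f"{risk:25s} independent={rating:6s} systemic={systemic.get(risk, 0.0):.2f}")
```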

This approach also makes it possible to identify the most systemically important firms – those exposed to the greatest number of risks with potentially systemic consequences. Ellinas suggests this information could be used to “highlight possible collaborations” between insurers by focusing on the firms exposed to the largest number of systemically important risks, and where a failure in risk management is most likely to trigger an industrywide cascade of similar risks at other firms.

Modelling cyber risk: FAIR’s fair?

By Tom Osborn | Opinion | 3 August 2017

Proponents say factor analysis can be applied to cyber risk; detractors retort results are still guesswork

Of all the potential loss events banks’ operational risk managers are tasked with trying to model, cyber attacks are among the most challenging.

Firstly, say op risk managers, there’s the sheer range of cyber threats banks are exposed to, and the wide variability in the frequency with which they occur. Distributed denial-of-service attacks, viruses and email phishing threats are everyday occurrences for a bank; successful ransomware attacks and data breaches that result in large data thefts are – for now – relatively uncommon.

Modelling the frequency of any potential op risk loss event is difficult, but practitioners argue this is especially true for cyber, for three reasons.

“Firstly, I think there is a non-linear relationship between controls and losses, as the controls are only as good as the weakest link,” says one senior op risk manager, citing the examples of staff turning off anti-virus software to download an attachment, or responding to a convincing-looking phishing email.

Secondly, the relationship could hold the other way, too: a large, sophisticated bank could have inadequate cyber defences, but provided it is perceived to be strong, there is evidence to suggest it is less likely to be a target for cyber attack. The op risk manager cites recent payment network frauds being concentrated on emerging market banks as an example.

Finally, a bank cannot model its exposure to a so-called zero day attack – one that exploits an unknown vulnerability in its cyber safeguards, for which by definition it has no defence.

The loss impact of any of these events is also highly variable. For example, regulatory fines for poor systems and controls processes in the event of a data breach will be set at the discretion of supervisors; banks subject to the European Union’s forthcoming General Data Protection Regulation could be whacked with fines of up to 4% of their annual turnover in the event of a serious breach, or 2% if they simply fail to notify their regulator within 72 hours.

Other losses – ransom payments to cyber thieves, compensation to affected customers, loss of future business due to reputational damage – are also difficult if not impossible to quantify with any accuracy. Small wonder, then, that Rohan Amin, chief information security officer at JP Morgan, describes trying to model the loss a bank can expect from a particular cyber event as “at best, a guess”.

More than a decade after it was first applied to modelling cyber risk, the most commonly used approach to quantifying cyber threats among banks remains the Factor Analysis of Information Risk (Fair) model. The approach provides a straightforward map of risk factors and their interrelationships, with its outputs then used to inform a quantitative analysis, such as Monte Carlo simulations or a sensitivities-based analysis.
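
For a sense of what that quantitative layer can look like, here is a minimal Monte Carlo sketch built on a Fair-style decomposition of annual loss into event frequency and loss magnitude per event. The distributions and parameters are illustrative assumptions, not Fair’s prescribed calibration.

```python
# Minimal Monte Carlo sketch over a Fair-style decomposition:
# annual loss = number of loss events x loss magnitude per event.
# Distributions and parameters are illustrative, not a calibrated model.
import numpy as np

rng = np.random.default_rng(7)
n_sims = 100_000

# Loss event frequency: Poisson with a subjective annual rate estimate
event_counts = rng.poisson(lam=2.0, size=n_sims)

# Loss magnitude per event: heavy-tailed lognormal, parameters assumed
def annual_loss(n_events):
    if n_events == 0:
        return 0.0
    return rng.lognormal(mean=12.0, sigma=1.5, size=n_events).sum()

losses = np.array([annual_loss(n) for n in event_counts])

print(f"Mean annual loss: ${losses.mean():,.0f}")
print(f"95th percentile:  ${np.percentile(losses, 95):,.0f}")
print(f"99th percentile:  ${np.percentile(losses, 99):,.0f}")
```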

Many underwriters are not doing any underwriting at all. They’re simply saying, ‘this is my price for the [policy] limit’. It’s rather scary

John Elbl, Air

Proponents say the approach helps banks order and prioritise their defences against the myriad threats they face; detractors say its outputs are only as reliable as the inputs, which, due to the nature of the threats in the case of cyber risk, are inherently based on guesswork.

Shorn of a way of predicting losses accurately, banks may look to the traditional risk-transfer medium of insurance – though underwriters have long struggled to model the potential impact of cyber threats too. Modelling techniques have evolved rapidly in the past couple of years, firms say; it is now common for underwriters to tap the services of catastrophe modelling firms – companies more used to assessing potential losses from natural disasters – as well as niche cyber security firms, which can use a range of covert techniques such as ethical hacking to assess a potential client’s defences.

However, amid swelling demand from banks for cyber cover, some fear underwriting standards have gone backwards: “Many underwriters are not doing any underwriting at all. They’re simply saying, ‘this is my price for the [policy] limit’. It’s rather scary,” John Elbl, cyber risk expert at Air, a catastrophe modelling firm, tells Risk.net.

Banks are all too cognizant that insurance can only ever be a loss mitigant, and not a defence against a potentially existential threat. As Gilles Mawas, senior expert in cyber, IT and third-party risk at BNP Paribas, recently put it: “Being reimbursed after you’re dead is irrelevant. If you lose €3 billion–5 billion ($3.4 billion–5.6 billion) and two years later you get back 50%, what’s the point?”

Cyber insurers under fire over lax underwriting claims

By Alina Haritonova | Features | 28 July 2017

Loss data becoming more granular and diverse, but critics highlight pricing inconsistencies among underwriters

These should be the best of times for cyber insurers. Soaring demand for their product coupled with increasingly sophisticated modelling of potential threats ought to mean coverage is more widespread – and more accurately priced – than ever before. But the increased competition between insurers these dynamics have driven has not been wholly positive for the market, many argue; indeed, some say underwriting standards have gone into reverse.

In the past, insurers often hired external teams to evaluate the IT systems and cyber security of potential clients. But multiple sources – at clients, advisory firms and underwriters themselves – say pre-coverage audits of larger policy buyers, common five years ago, are now virtually unheard of.

“If we look across different cyber insurance providers, there is no consistency in what they’re offering, what they cover, how they structure it, if they require any risk assessment or audit review. It feels like they’re just issuing the paper right now, making the money, and will figure it out later,” says Evan Wheeler, head of information risk management at MUFG Union Bank.

Experts at cyber security firms and catastrophe modelling providers – the latter more used to assessing potential losses from events such as hurricanes and natural disasters, now increasingly frequently being tapped to advise on potential cyber losses – have also raised a red flag.

“Many underwriters are not doing any underwriting at all. They’re simply saying, ‘this is my price for the [policy] limit’. It’s rather scary. There is a lot of that happening right now,” says John Elbl, cyber risk expert at Air, a catastrophe modelling firm that provides cyber intelligence.

Show me the money

Underwriting cyber is an increasingly lucrative and competitive business. The volume of cyber insurance premiums grew by 74% per year between 2013 and 2015 in the UK, according to a recent report from London Market Group, a trade body, and Boston Consulting Group.

The explosive growth in affirmative cyber insurance cover – policies that explicitly include coverage for cyber risk – has spurred increased scrutiny by regulators, who are already monitoring the sector’s exposure to ‘silent’ or non-affirmative cyber underwriting, in which cover is implied in other policies but not specifically included or excluded. In a recent supervisory statement, the UK’s Prudential Regulation Authority (PRA) urged insurers to proactively monitor their cyber exposure.

Other watchdogs are understood to be monitoring the PRA’s actions closely, fearful that the current underwriting boom, coupled with insufficient oversight among insurers of their exposure to risks contained in existing policies, could increase the industry’s potential exposure in the event of a major global cyber attack, risking widespread losses across the sector.

Brokers and underwriters, for their part, cite the unwillingness of banks and other large organisations to go through a lengthy and burdensome series of checks when buying cover. Policies are increasingly prepackaged to make the process quicker for both sides. In practice, this has often meant doing away with full IT audits.

“As more companies have started buying cyber insurance, there’s a bigger pot of premium for insurers – and a lot of them look at cyber as their main organic growth area now – so brokers can use those forces to make it easier for a client to get more [cover]. Clients don’t want that intrusive process in the first place,” says Sarah Stephens, head of cyber, content and new technology risks at insurance broker JLT.

Experts at catastrophe modelling firms have made similar observations: “Audits put clients off. The marketplace is so competitive: there are so many companies offering direct commercial cyber policies – around 70 in the US – so if one company starts asking a lot of questions, it may be easier for the risk manager to just go somewhere else,” says Elbl.

Others argue that, even if the underwriting process itself has become less intrusive, risk management standards remain robust.

“Insurers are inherently very conservative, and where they don’t understand risk, they typically take the conservative view,” says Tom Harvey, product manager within the cyber division of RMS. “A year ago many insurers were uncertain about the systemic nature of cyber, so they were holding capital equivalent to the sum of the potential exposed limits. They were hypothesising that everything could potentially go wrong. Our models quite clearly show that the probability of this happening is extremely low, well beyond the return periods that most insurers manage their business to.”

Underwriters look at the probable maximum loss (PML) to determine the proportion of the exposed limit that can be lost under a realistic scenario. For natural catastrophes, it can be between 1% and 10% of the exposed limit, with the PML of a California earthquake being around 7%. One expert at a catastrophe modelling firm relates that, in the context of cyber, a typical PML that insurers are exposed to is between 3% and 5% of the total exposed limit. That indicates that some insurers could be holding excessive amounts of capital against potential losses, which in turn increases the price of premiums.
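
A short worked example of that arithmetic, using the quoted 3–5% range and an illustrative book of exposed limits, shows how far apart the two capital assumptions sit:

```python
# Worked example: capital held against the full exposed limit versus against
# a 3-5% probable maximum loss (PML). All figures are illustrative.
exposed_limit = 500_000_000         # total policy limits exposed, in dollars

full_limit_capital = exposed_limit  # "everything goes wrong" assumption
for pml_pct in (0.03, 0.05):
    pml_capital = exposed_limit * pml_pct
    print(f"PML {pml_pct:.0%}: capital ${pml_capital:,.0f} "
          f"vs ${full_limit_capital:,.0f} at the full limit "
          f"({full_limit_capital / pml_capital:.0f}x more)")
```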

Risk at a premium

So if underwriters aren’t pricing policies according to the risk they think an institution is exposed to, what methods are they using? Underwriters say premiums paid for cover differ hugely depending on a wide range of factors, including the buyer’s industry, turnover, and the organisation’s perceived level of cyber security.

“It can vary widely based on limit, retention and specific coverage requests, but in general the price per million dollars of cover is between $10,000 and $25,000,” says Mark Camillo, head of professional liability and cyber for Europe, Middle East and Africa at AIG.

Many note the irony that risk modelling practices for assessing cyber exposure have steadily improved in recent years, even as underwriting practices have become less rigorous. Though their usefulness is hotly disputed, actuarially developed models such as the Factor Analysis of Information Risk (Fair) approach that many financial services firms use when assessing cyber threats have been around for more than a decade.

In the past, insurers were more likely to employ a range of techniques as part of onsite security audits and vulnerability testing of potential clients – for example, bringing in ethical hackers to probe a company’s IT systems and identify weak points and gaps in security patching.

These practices have been superseded by face-to-face client meetings and calls between underwriters and senior practitioners from a bank’s risk and IT functions, say observers. Over the course of such a meeting, underwriters will assess the size of the company, the industry it operates in, the amount and type of personally identifiable information processed by the firm, turnover and income, as well as the controls around IT security the company has in place. The aim is to help index the company’s total exposure in case any of the information or systems get compromised.

Meetings are supplemented by proposal forms, which, in the case of Lloyd’s syndicates offering cyber cover, can run to around 80 detailed questions. “The [Lloyd’s] application form that underwriters use addresses various areas of process and procedures, as well as acting as an information gathering tool. The questions are revised to reflect threat conditions and changes in regulations,” says Tom Draper, technology and cyber practice lead at Arthur J. Gallagher.

Major insurers insist such checks are sufficient: “It’s a good way of assessing the controls they have in place. More importantly, you can get a sense of how joined up they are to one another,” says an IT practice lead at a large US insurer. “If a policyholder were to go down due to a cyber-related issue, what would they lose in the way of net income? What would the expenses be on average to get back up and running? How severe are losses going to be compared to their counterparts in other industries and the same industry?”

However, some brokers say such questions give the insurer at best a proxy view of a bank’s true exposure to cyber risk, rather than an institution-specific view.

“The general things that underwriters are looking at are turnover, industry, number of employees, customers, which can tell how much data you are exposed on. Beyond that, there’s no standard on how it’s underwritten at all right now,” says the head of cyber analytics at a large British insurance broker.

Cyber security experts argue that face-to-face meetings without a more formal assessment fail to equip insurers with sufficient information to build an accurate pricing model that would take into account the risks that a client may be exposed to.

“All of that information just helps underwriters defend themselves in the course of a claim. And you have this vicious circle: though some underwriters do, some don’t know what questions are the important ones, [and] some don’t know how to model it even if they think they do have some important questions,” says Ryan Jones, director of cyber risk intelligence at insurance broker BMS. “The problem is to do with the models in the first place. I don’t think that there is a lack of data. Even if you don’t have a huge claims database, you can still use modelling, figure out what’s important logically. There are estimations and forecasting techniques that existed for 50–60 years that are statistically relevant and accurate that you can use to get insight from seemingly less data than you would normally be accustomed to.”

The data deficit

A lack of usable cyber loss data to assess risks and help inform an accurate pricing model is an age-old complaint within the insurance industry. But in theory, there has never been a better time to model cyber risk, even in the absence of the lengthy historical data available in other disciplines that is needed to make plausible projections about losses from cyber events.

Fair remains the most commonly used tool for quantifying cyber risk, based on a methodology developed by Jack Jones, a former chief information security officer at Nationwide Insurance and co-founder of vendor RiskLens, in the early 2000s.

Its proponents say the approach has helped them analyse and map threats, weaknesses and potential impacts of cyber events in a consistent way. However, the methodology has also faced criticism for being overly reliant on subjective, qualitative inputs, which detractors say leads to less precise and actionable outputs. That makes the approach more suitable for standalone losses, many argue, than for tracking an organisation’s overall cyber exposure – and hence less useful for assessing the threat cyber poses to a large bank, for instance.

It feels like they’re just issuing the paper right now, making the money, and will figure it out later

Evan Wheeler, MUFG Union Bank

Establishing whether difficulties in modelling cyber risk stem from the lack of models or data is a chicken and egg situation, according to some cyber security experts.

“Usually the sophistication of the model follows the data. It’s probably more a question of data,” says Tom Conway, principal at EY and leader on cyber risk measurement and cyber insurance. “There are existing models that insurers could use – they certainly deal with risky sophisticated coverages like this. Underwriters need data to come up with pricing of a single policy, and then they’re very concerned about aggregation, so they need the data to establish the linkages between different companies, where they’re using common vendors, for example.”

Underwriters’ concerns over data are not without foundation. Market watchers point out that, even where claims data exists going back to the early days of policy cover in the late 1990s, the evolving nature of the threat from cyber attacks – and the accompanying shift in what is explicitly covered under policies – means that even data from five years ago may no longer be relevant when assessing current and future threats.

Affirmative action: how policies have developed

The first affirmative cyber insurance policies evolved in the late 1990s from technology errors and omissions or professional indemnity insurance, a form of liability insurance that seeks to shield service providers from expenses arising from negligence claims made by a client and damages awarded as a result of a lawsuit.

These policies bear little resemblance to what is available on the market today. Early affirmative cyber risk policies were largely focused on the privacy liability elements of cover, insuring against liabilities organisations could face if they spread a virus, as well as business interruption losses.

The introduction of a mandatory cyber breach notification regime in the US in the early 2000s was also a major spur for policy development. Under this regime, which varies from state to state, entities that suffer a cyber breach are obliged to report it to customers and, in some cases, regulators – prompting firms that hold and disseminate vast amounts of customer data such as banks to turn to first-party insurance to cover a defined loss amount resulting from a breach.

First-party insurance cover was later introduced for expenses related to customer notification, identity monitoring and regulatory fines, and has over time expanded to accommodate forensic investigation expenses, legal costs, remediation call centre costs and reputation management.

All-encompassing cyber cover is still a way off, however, banks complain. Insurers counter that fully comprehensive policies would require chunks of fresh data, as well as more sophisticated models – something the industry is still working on.

Doubts over data cast a shadow over the ability to effectively model risk. “Modelling is an area that the insurance industry is heavily focused on. Two years ago the discussion was about whether cyber risk was something that could be modelled. A year later, it became a more practical conversation, from ‘can we model’ to ‘how can we model’,” says Jonathan Laux, head of cyber analytics at Aon Benfield, Aon’s reinsurance business.

The growing use by underwriters of specialist cyber intelligence and catastrophe modelling firms to supplement their in-house know-how has been a critical part of this evolution, says Laux. The former carry out technology-based outside-only evaluations aimed at estimating a company’s exposure based on all of its observable features.

“They rank the susceptibility of companies to an attack based on publicly available information and searches on the dark web, monitoring the number of times a company’s name was mentioned on the dark web, social media, gathering the mentions of the company by its disgruntled employees, the software a company claims to be using. Having gathered all this data, cyber intelligence companies are able to provide insurers with information on their insured’s risk exposure,” says one cat-modelling industry veteran.

Cyber specialists such as Symantec collect data on breaches and attacks, which they offer to insurers, as well as developing their own actuarial tools tailored to the needs of insurance companies involved in cyber underwriting. Other specialist intelligence firms focus on offering underwriters access to information on the exposures of their clients – ones they might often be unaware of.

Alex Heid, chief research officer at cyber intelligence firm Security Scorecard, says: “When a company applies for a policy, underwriters get the security score of the company and its downstream supply chain. We collect as much data as possible and we will contextualise and colour code it to convey risk. We’re looking at 10 different areas of risks – network security, web application security, hacker chatter, social engineering, for example. Underwriters will use this data to provide more granular models.”
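
As a rough sketch of how such multi-area scores might be rolled up and colour-coded for an underwriter, consider the following; the areas, weights and thresholds are illustrative and are not any vendor’s actual methodology.

```python
# Sketch: aggregate per-area security scores into one colour-coded rating.
# Areas, weights and thresholds are illustrative assumptions only.
area_scores = {  # 0-100, higher is better (hypothetical assessment)
    "network security": 82,
    "web application security": 74,
    "hacker chatter": 61,
    "social engineering": 68,
}
weights = {area: 1.0 for area in area_scores}  # equal weights assumed

total = sum(area_scores[a] * weights[a] for a in area_scores) / sum(weights.values())

if total >= 80:
    band = "green"
elif total >= 65:
    band = "amber"
else:
    band = "red"

print(f"Overall score {total:.1f} -> {band}")
```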

Insurers are increasingly using catastrophe modelling firms to track the service providers of their insured parties, which helps some underwriters fill in the gaps in their client assessments. This knowledge allows underwriters to do better portfolio-level assessments of the risks their clients are exposed to. Once insurers are aware of the potential exposures, they can factor this information into cyber models.

“Most insurance companies do not explicitly ask their policy holders about the service providers they use. Frequently, risk managers may not even be aware of all the service providers they have. Their responsibility is to protect the company, buy an insurance policy, and make sure everything is in order,” says Air’s Elbl.

RMS, another catastrophe modelling company, moved into cyber risk three years ago. The majority of its cyber insurance clients use its services to understand the accumulation of risks across their portfolios, it says. RMS provides access to historical incidents, insurance claims data and other information on the exposure of the insured party. In the case of a bank, for example, that could include an assessment of its digital assets, its financial reliance on the functioning of its online banking network, as well as assessments of its third-party service providers.

Increased awareness of a client’s exposures doesn’t necessarily lead to increases in premium prices, however, says RMS’s Harvey. Over the past six months, some cyber insurance newcomers have turned to RMS for help in modelling the price of premiums earlier in the underwriting process, he notes, as they started on-boarding new clients.

“Both data breaches and business interruption are difficult for insurers to model, especially as there’s a limited claims history. The business interruption coverage has exposures over the past 18–24 months. Large insurers do have good claims data and a way of modelling this risk, but this is by no means consistent across all the companies in the cyber insurance market,” he says.

GDPR to boost cyber crime reporting

Tough new European data protection standards are proving a huge burden for banks to implement, but insurers are hoping one element of the rules – the requirement to report a data breach to regulators within hours of it occurring – will prove a boon when it comes to bolstering data on losses.

One of the burdensome aspects of the General Data Protection Regulation (GDPR), which comes into force in May 2018, is a requirement for all firms to report to their regulator within 72 hours any data breaches in which the personal records of individuals are compromised. Failure to comply with this requirement can result in a fine of 2% of an organisation’s global turnover.

GDPR has acted as a wake-up call to an industry that was previously unwilling to face the pervasiveness of cyber risk, prompting some banks that were initially sceptical of the product to turn to cyber insurance as a liquidity management tool.

“From a money and resources perspective, [cyber insurance] means that you might be able to get some of the money that you paid in fines, back,” says one London-based operational risk specialist at a non-European bank.

For the insurance sector, GDPR, with its mandatory breach notification requirement, represents an opportunity to gain access to a wider pool of data breach-related records, as demand for cyber breach cover grows in response to the new regulation.

One source at a national regulator tells Risk.net they hope the rules will help address the current underreporting of cyber breaches.

“GDPR has raised the focus of board members on data breaches. We’ve seen a lot of companies that have been historically indifferent towards using insurance as a risk transfer mechanism now seeing it as a valid tool, a complement for existing structures,” says Tom Draper, technology and cyber practice lead at Arthur J. Gallagher.

The UK Information Commissioner’s Office (ICO) and UK-based insurers have discussed ways in which anonymous breach information sharing could help the insurance industry.

“A vibrant market in cyber insurance may encourage organisations to adopt better cyber security practices as they look to mitigate the risks arising from a cyber attack, and reduce the cost of premiums,” writes an ICO spokesperson in an e-mail to Risk.net. “We’ve had discussions with the insurance industry in order to understand how aggregated data could help insurers to better understand cyber risks and trends.”

Once GDPR comes into force in May 2018, regulators will start collecting breach data which they could then present to insurers in an anonymised format, providing them with more historical data.

“When GDPR takes effect and more companies are required to notify in the event of a breach, it would be very helpful for the insurance industry to get some data back from the regulator, even in an anonymised format, to be able to better understand the type of attacks and the frequency of incidents, so they can better protect themselves,” says Mark Camillo, head of professional liability and cyber for Europe, Middle East and Africa at AIG.

Data breach information-sharing initiatives already exist in the US, where data breach notification laws in most states were implemented in the early 2000s. The data provided by Privacy Rights Clearinghouse, a non-profit consumer organisation based in San Diego, California, is known to be used for modelling by insurance brokers, such as Marsh.

“Insurers are reluctant for both confidentiality and competitiveness reasons to share their loss details; you have a lot of uncertainty over how to price risk and nobody wants to share that, but actually that would be very valuable for the growth of the overall industry,” says Sarah Stephens, head of cyber, content and new technology risks at JLT.

Bailout obsession holds back US CCP resolution regime

By Joanna Wright | Features | 25 July 2017

Dodd-Frank leaves legal uncertainty, but proposed alternatives could be even worse

Since 2008, bailout has been a dirty word in the US for Democrat and Republican politicians alike. But the zeal with which Jeb Hensarling, Republican chair of the House Financial Services Committee, is hunting down any hint of a risk of bailout is uniting regulators and market participants in the worry the concept could be taken too far. Above all, while its application to banks may be justified, efforts to extend a bank-style resolution regime to derivatives central clearing counterparties (CCPs) might increase systemic risk, critics warn.

“CCPs are clearly not banks. Their assets change with market moves and transactions being novated to clearing, and their value will change with every variation margin run,” says Ulrich Karl, head of clearing services at the International Swaps and Derivatives Association.

In the same vein, a CCP in distress after a clearing member default looks very different to a bank in distress. For a CCP, a risk management entity, failure means the ongoing inability to achieve its state of normal equilibrium – the matched book of derivatives positions. Unlike a failing bank, a CCP may be perfectly well-capitalised immediately prior to a member default. Measures that would help a failing bank would not help a failing CCP, so resilience tools must be different.

“For example, tools like transfer of business to other entities will seldom be an option as there are likely to be no comparable CCPs with the same clearing membership. Also, CCPs do not require as much capital for their daily business, so requiring a large amount of total loss absorbing capacity would not be efficient,” says Karl.

Regulators clearly share these concerns. The Chicago Federal Reserve’s senior policy adviser, Robert Steigerwald, and its vice-president of financial markets, Robert Cox, say in an April 2017 discussion paper that policymakers have viewed CCP risk management and resilience through the lens of bank regulation since 2009, putting resolution plans on the wrong path.

Chasing waterfalls

US CCPs are subject to standards derived from risk management and recovery principles for market infrastructure drawn up by the Committee on Payments and Market Infrastructures and International Organization of Securities Commissions in 2012. Oversight of CCPs is bifurcated between the Commodity Futures Trading Commission (CFTC) and the Securities and Exchange Commission (SEC), largely by asset class.

Since the end of 2016, CFTC-regulated CCPs, so-called derivatives clearing organisations (DCOs), have had to implement these internationally derived principles, maintaining prefunded financial resources sufficient to withstand the default of their largest or two largest clearing members. These prefunded resources include measures CCPs have in their default waterfall – client margin and guarantee funds. Beyond those, CCPs can call on tools that have proven somewhat controversial globally, such as the haircutting of variation margin payments to members and the tearing up of some member positions.
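
A simplified sketch of how a loss from a member default would be absorbed layer by layer through such a waterfall is shown below; the layer names and sizes are purely illustrative, and real waterfalls vary by CCP and rulebook.

```python
# Simplified default-waterfall sketch. Layer names and sizes (in $m) are
# illustrative; real waterfalls vary by CCP and rulebook.
waterfall = [
    ("defaulter's initial margin", 400),
    ("defaulter's guarantee fund contribution", 100),
    ("CCP skin in the game", 50),
    ("surviving members' guarantee fund", 600),
    ("assessment powers / VM haircuts", 500),
]

def absorb(loss, layers):
    """Walk a loss down the waterfall and report what each layer absorbs."""
    remaining = loss
    for name, size in layers:
        used = min(remaining, size)
        remaining -= used
        print(f"{name:40s} absorbs {used:5.0f} (loss remaining {remaining:5.0f})")
        if remaining <= 0:
            break
    return remaining

shortfall = absorb(loss=1_300, layers=waterfall)
print(f"Unallocated loss: {shortfall:.0f}")
```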

But the US does not have a statutory CCP resolution regime, such as the one currently being drafted in the European Union. The design of such a regime has to take into account the fact that if such a resolution were required, it would probably be in the context of a crisis of the order of Lehman Brothers defaulting. A failing CCP, at least in the context of participant default, is a CCP that cannot return to a matched book. Given the level of cross-membership at the major global CCPs, moving positions from one CCP to another might not be a viable solution in the event of widespread distress.

You couldn’t just look at taking one struggling CCP and dealing with it by turning to the guarantee fund and then moving its positions to other clearing houses, assuming it would wind up well

Joel Telpner, Sullivan & Worcester

As Joel Telpner, a partner at law firm Sullivan & Worcester, says: “If we are talking about a market-wide disruption that is affecting all CCPs, you couldn’t just look at taking one struggling CCP and dealing with it by turning to the guarantee fund and then moving its positions to other clearing houses, assuming it would wind up well.”

In such a case, it is most realistic that the failing CCP would be put through orderly resolution under Title II of the Dodd-Frank Act, he says. This is the same Title II that applies to global systemically important banks; it empowers the Federal Deposit Insurance Corporation (FDIC) to act as receiver of the failing entity. Herein lies the root of concerns about failed CCPs being treated according to the same principles as failed banks.

“In theory, we would turn to Title II in the event of a CCP failure, but at the same time, in this scenario, some of the clearing members themselves may be subject to procedures under Dodd-Frank, not just the CCP. And then CCPs become too big to fail and you assume some kind of public bailout,” says Telpner.

Who is in charge?

Many in the derivatives community assume, as Telpner does, that Title II would be invoked for a CCP’s default. Indeed, regulatory sources say this is probably what would happen, but that does not necessarily mean Title II provides a legally certain route for CCP resolution. Darrell Duffie, professor of finance at Stanford University, says: “It’s not even legally clear who is the resolution authority. Some people presume it’s the FDIC, but that has never been clarified.”

In fact, not even Title II’s applicability to CCPs is clear. Title II provides for orderly resolution of “financial companies” and in some cases this definition is difficult to apply to a CCP.

As the Chicago Fed’s Steigerwald and New York Fed associate David DeCarlo set out in another discussion paper in September 2016, the issue boils down to “determining whether the CCP engages in actions the Federal Reserve would determine as financial in nature or incidental thereto for purposes of section 4(k) of the Bank Holding Company Act”. A CCP’s function as an intermediary is ambiguous and could be interpreted either way, as financial or not, they write, although there is precedent supporting the conclusion that at least some CCPs do qualify as financial companies.

For CFTC-regulated CCPs, liquidation under Chapter 7 of the bankruptcy code is another possibility. DCOs, along with futures commission merchants (FCMs), qualify as “commodity brokers” under this chapter. On the other hand, they do not qualify for Chapter 11 reorganisation, because commodity brokers cannot be debtors under that chapter. In turn, the measures applicable to DCOs cannot necessarily be applied to SEC-regulated clearing houses focused on securities-based swaps.

Yet another option for CCPs is a manoeuvre called “equitable receivership”, which is sometimes used in the case of insolvent FCMs or SEC-regulated broker-dealers. Equitable receivership has the advantage of empowering the CFTC to initiate an involuntary insolvency proceeding against a CCP. Under Chapter 7, liquidation has to be voluntary on the part of the CCP; involuntary proceedings are possible, but only if instituted by creditors of the CCP, and the CFTC would not count as a creditor.

New class of ‘too big to fail’

The real question about the use of Title II, however, seems to rest more on political will than on a consideration of these alternatives. Title II and Dodd-Frank in general have come under heavy criticism from the Republicans. But it seems unlikely the Treasury secretary would refuse to put a CCP through orderly resolution if there were some cataclysmic event, even under this administration, say regulatory sources.

Title II, however, might not be around forever. In the name of ending ‘too big to fail’, Hensarling’s Financial Choice Act would repeal Title II and replace it with an amended Chapter 11. Similarly, it would also repeal Title VIII, the chapter that allows the government to designate CCPs as systemically important financial market utilities (FMUs), granting them access to borrowing through the Fed’s discount window. 

In an article sent to The New York Times, which Hensarling says refused to publish it, he writes: “The Financial Choice Act also repeals Dodd-Frank’s Title VIII, which created a new class of ‘too big to fail’ entities known as financial market utilities, and with it access to the Fed’s discount window, and it reforms the Federal Reserve’s 13(3) emergency lending authority to significantly reduce its potential use for bailouts while making sure it still exists for liquidity needs.”

Were Title II to be revoked, it is likely that CCPs would be left with Chapter 7 liquidation, says Steigerwald. “In the case of DCOs such as CME, Ice Clear Credit and Ice Clear US, [Dodd-Frank repeal] points in the direction of Chapter 7 of the bankruptcy code,” he tells Risk.net.

“There undoubtedly would be co-ordinated action between the CFTC and the CCP, where the CCP presents itself for liquidation. There is some point at which it becomes apparent to the CCP that recovery has failed and at that point it probably wouldn’t dispute with the CFTC the merits of going into Chapter 7. If my analysis is correct, the CFTC could force the CCP into liquidation through the equitable receivership if it had to.”

Experts have already raised the possibility of bankruptcy as a potential solution for CCP failure, but Duffie argues it would be an inherently unsuitable way of winding up a CCP.

“The bankruptcy code has very limited application in this area because while the CCP, as a corporation, has debt you could cancel in bankruptcy, those debt obligations of a CCP are tiny in comparison with the obligations of its members to each other. The amount of money involved is more by orders of magnitude,” says Duffie.

“Even if you eliminated all the debt of the CCP operator in bankruptcy, you could not handle even a tiny fraction of the obligations of the clearing members to each other through the CCP contracts they hold. The CCP is arranging these contracts – it’s not really a significant partner in the contracts,” he adds.

Mike Crapo, chair of the Senate Committee on Banking, has indicated his intention to draft his own legislation that will not simply reproduce the Choice Act wholesale. But Hensarling is not isolated in his attitude towards the resolution provisions in Dodd-Frank.

Telpner says: “There is a lot of concern that the whole process of orderly liquidation is probably so complicated that it wouldn’t work. I think both Democrats and Republicans are now starting to look at Title II as they look at Dodd-Frank in general, questioning that this is the best way to go or whether they should be amending the bankruptcy code.”

So while the Choice Act in itself might not constitute much of a threat to Dodd-Frank, it is emblematic of ideas that do.

Broken windows

Equally troubling is that Hensarling seems to have conflated access to the Fed’s discount window (under Title VIII of Dodd-Frank) with taxpayer bailouts. CCPs that are FMUs have access to Fed accounts in which they can deposit client collateral, protecting these funds from custody risk, says Scott Hill, chief financial officer at Ice. He testified before the House Agriculture Committee on June 27 that this facility should be made available to all CCPs and argued that access to the discount window would mitigate liquidity risk.

Referring to his testimony, he tells Risk.net that CCPs depositing collateral at the Fed should not be controversial. Fed liquidity provision would then mean accessing the discount window to turn US Treasuries into dollars for liquidity purposes, rather than representing unsecured liquidity lines.

“People have conflated broad discount-window access with a bailout because they think the thought is: we are going to take in all these uncertain securities and we are going to loan out dollars to the clearinghouses. But this is not what we have advocated for. In some quarters, the perception is that access to the Fed equals bailout – nothing could be further from the truth, especially for the limited purposes we are talking about,” says Hill.  

This stance has the support of Isda. Karl does not expect to see comprehensive CCP recovery and resolution rules in the US, but says something must replace Title II’s liquidity provisions if they are revoked.

“The Title II benefit is that the CCP can have liquidity support. Liquidity support is crucial in a resolution and is not the same as a bailout. In the US, the Federal Reserve could provide liquidity support to certain CCPs that have been designated as systemically important under Title VIII of the Dodd-Frank Act, but to date it has not committed to doing so. Liquidity support would be cash versus high-quality collateral that is eligible for Fed discount-window access, not unsecured loans,” says Karl.

Policy incentives

Steigerwald appeared to support market participants arguing in favour of Fed liquidity provision at the June CFTC meeting. He said it would be “extremely beneficial” for the authorities to explain “there is a broader social value to clearing”. He could not see what resolution authorities alone could bring to the situation that is an improvement on existing plans for CCPs. Above all, tools to enable a CCP to survive are preferable to the alternative, he argued.

“I talk about a thing; it’s akin to jump to default risk – jump to resolution risk,” Steigerwald said. “We surely don’t want that. Under conditions we must accept as binding, such as the absence of public funding for solvency to restore a troubled market infrastructure, I think we are stuck with recovery. I don’t know what any of the current proposals about resolution add to what the clearing houses have already embedded in their rules, together with the co-ordination and co-operation [of the members], and the natural incentives of the clearing members to preserve the value that is reflected in the book.”

In the 2017 paper that Steigerwald authored with Cox, the two recommend policymakers consider that a CCP’s capital cannot be a primary, or even significant, resource for loss absorption without altering the incentive structure embedded in the default waterfall. Increasing the CCP’s own capital, or skin in the game, would change the business model of the CCP and no longer incentivise members to contribute to mutualising risk, which is the whole point of the CCP.

The market has long shown a preference for clearing as a risk mitigation tool and I don’t see that changing, whether in a world where Dodd-Frank remains or a world where Choice comes in

Scott Hill, Ice

The alternative would be for policymakers to aim to remove disincentives to members participating in auctions of defaulted member positions. Some market participants and CCPs have said policymakers should relax capital requirements for members taking on a defaulted member’s positions after an auction. Not to do so could even increase systemic risk, says Jackie Mesa, senior vice-president of global policy at the Futures Industry Association.

“If there were a default of one of the clearing members of a CCP, the positions there would be ported to another clearing member and the surviving member would have to take on the capital impact. That becomes very difficult under the current capital structure. Instead, you have to liquidate those positions of that defaulting member and that would create a ripple effect in the market,” says Mesa.

CCP resilience is one of the most complex issues for regulators in the US right now and one of the most urgent as cleared markets are here to stay. As Hill says: “The market has long shown a preference for clearing as a risk mitigation tool and I don’t see that changing, whether in a world where Dodd-Frank remains or a world where Choice comes in.”

It is important then that regulators continue to endorse principles to give CCPs maximum flexibility to tackle distress and return to a matched book, he says.

“We encourage regulators to focus on the principle, not the prescription. None of us know what the next default is going to look like, so the less prescriptive [the regulation], the better the chances that clearing houses will withstand even the most difficult default,” says Hill.

On this issue also, CCPs already appear to be in tune with the thinking of regulators. Steigerwald told the CFTC hearing that the recovery and resolution process should remain in the hands of market participants as far as possible.

“Whether to undertake the additional steps necessary to restore the matched book, or surrender the value embedded in those positions and tear up the whole venture, seems to me a decision necessarily taken by the primary stakeholders in clearing and policy should not interfere with the ability of the clearing community to make that decision,” he said.

With such alignment between CCPs and prudential regulators, a viable policy for CCP recovery and resolution in the US should be within reach. However, Crapo and his Senate colleagues will first need to overcome the simmering political tensions over Dodd-Frank to arrive at a reasonable and practical solution.

Treasury using network theory to combat cyber threats

By Steve Marlin | News | 21 July 2017

Targeted attacks and random threats call for different defences, financial research unit finds

The US Treasury is applying network theory to help model defensive strategies against cyber attacks on the financial system.

Network theory is the study of complex interacting systems, such as ecological food chains or social structures. It is also increasingly used to model the links between complex financial institutions, where it can help evaluate interconnectedness within and across the financial sector, for instance to determine where a cyber attack could wreak the most havoc.

The Office of Financial Research (OFR), a department of the Treasury, is building maps that highlight this interconnectedness between nodes within a network.

“Node analysis allows us to identify the most significant and critical entities across financial markets,” said Simpson Zhang, a researcher at the OFR, at a meeting of the OFR’s financial research and advisory committee in New York on July 20. “This information is important in informing us about which institutions to focus on for regulation and protection.”

To illustrate, Zhang gave the example of a network containing three hubs, such as financial market utilities, each connected to a number of nodes. The hub that contains the largest number of links to other nodes within the network would be the most important to protect against a random attack, because if that was to become unavailable, it would create the greatest disruption to the network.

Against a targeted attack, however, the hub that has the most direct links to the other hubs in the network, that is, the hub that has to pass through the fewest nodes to get to the other hubs, is the most critical to protect.

“It’s important to consider the specific type of attack when we design our defences and choose where to focus our protection,” said Zhang.

He added: “Against a random attack, [the hub with the most direct links] would be unlikely to be hit, and thus is less necessary to defend. But when dealing with a sophisticated and knowledgeable adversary, it would be attacked if not protected, which elevates its importance.”
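
The distinction can be illustrated on a toy network: the number of links a hub has flags its importance against random outages, while its proximity to the other hubs flags its criticality under a targeted attack. The network and figures below are made up for illustration and are not based on OFR data.

```python
# Toy illustration: hub A has the most links (priority against random
# outages); hub B sits between the other hubs (priority against a targeted,
# knowledgeable adversary). The network is made up.
import networkx as nx

G = nx.Graph()
G.add_edges_from(("A", f"a{i}") for i in range(8))  # hub A: many spokes
G.add_edges_from(("C", f"c{i}") for i in range(4))  # hub C: fewer spokes
G.add_edges_from([("A", "B"), ("B", "C")])          # hub B bridges A and C

hubs = ["A", "B", "C"]
links = dict(G.degree())  # exposure to a random outage

for hub in hubs:
    others = [h for h in hubs if h != hub]
    avg_dist = sum(nx.shortest_path_length(G, hub, h) for h in others) / len(others)
    print(f"Hub {hub}: links={links[hub]:2d}  average distance to other hubs={avg_dist:.1f}")
```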

The OFR is studying the structure of the financial sector by combining data from regulatory and commercial sources, which will help it in formulating policies for enhancing the stability and resilience of the financial system.

Network statistics such as link density, average degree – a measure of interconnectedness – and clustering enable the OFR to determine how interconnected each market is. More interconnected markets may lead to greater contagion in the event of an attack.
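
For illustration, the statistics mentioned above can be computed on a toy network in a few lines; the topology and figures below are not based on OFR data.

```python
# Toy computation of market-level interconnectedness statistics.
# The random network stands in for a real market map.
import networkx as nx

market = nx.erdos_renyi_graph(n=30, p=0.15, seed=1)

density = nx.density(market)                          # link density
avg_degree = sum(d for _, d in market.degree()) / market.number_of_nodes()
clustering = nx.average_clustering(market)            # local interconnectedness

print(f"Link density:       {density:.2f}")
print(f"Average degree:     {avg_degree:.1f}")
print(f"Average clustering: {clustering:.2f}")
```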

The OFR is also using financial data to compile information on the size of the financial entities at each node – as measured by the volume of trade going through each node – to determine which matter most for the stability of a market. The failure of one large node from an operational or cyber incident might matter less for stability than the failure of one small node, depending on how those nodes are connected to the rest of the system.

In the absence of mature risk models for cyber attacks, banks have mostly been relying on checklists, scenario analysis, tabletop exercises and audits to manage cyber risk.

“I like tabletop exercises, because that prepares the industry for potential events like this, both random and targeted,” said Tom Dunlap, global head of reference data operations at Goldman Sachs, at the OFR meeting.

Bank cyber chiefs at odds over risk models

By Steve Marlin | Features | 20 July 2017

Vast scope of threats makes modelling unfeasible, say practitioners

An old business adage holds that ‘you can’t manage what you can’t measure’. This is especially apposite in risk management. Banks have spent several decades and billions of dollars developing models to estimate potential losses from market and credit risks, but many chief information security officers (CISOs) confess to being unable to measure their exposure to cyber risk with anything like the same degree of accuracy.

“Wouldn’t it be great if CISOs had something like credit risk [modelling], where they could be driving the risk down to a range that was known and acceptable?” says Rich Baich, chief information security officer at Wells Fargo in Charlotte, NC. “If you look at credit risk, there are risk ratings that say ‘this is our risk appetite’. In the information security industry, there’s none of that.”

Operational risk practitioners consistently rank cyber attacks as the top threat to their organisations, but the lack of a reliable model has left some CISOs – whose job it is to protect banks against malicious interventions – feeling as if they are flying blind.

“For three decades, researchers have been trying… [but] I haven’t seen any reliable model,” agrees Jason Witty, CISO at US Bank in Minneapolis, Minnesota.

The most commonly used approach to quantifying cyber risk remains the Factor Analysis of Information Risk (Fair), whose roots go back to a methodology developed in the early 2000s by Jack Jones, a former CISO at Nationwide Insurance and co-founder of vendor RiskLens.

Fair seeks to provide a straightforward map of risk factors and their interrelationships. The approach’s outputs can then be used to inform a quantitative analysis, such as Monte Carlo simulations or a sensitivities-based analysis.

Fair has some respect in the banking industry. “I’ve been using it for about 10 years. It’s probably the most promising of the methodologies out there,” says Evan Wheeler, director of information risk management at MUFG Union Bank in Los Angeles. “[It provides] a decomposition of risks and understanding [of] the relationships between threats, weaknesses and potential impacts in a consistent way that you can model. What I like about it is you can start off simple. You don’t need fancy simulations or distributions to start out with.”

However, critics say Fair relies too heavily on subjective, qualitative inputs, and hence produces “guesses and estimates” rather than actionable outputs.

“All the methodologies today, including Fair, are based on a long list of subjective, qualitative and quantitative inputs, like how much loss you can expect from something being unavailable, or what is the likelihood of a threat actor causing an impact,” says Rohan Amin, managing director of global technology at JP Morgan in New York.

RiskLens’s Jones disagrees with that assessment: “When performing a risk analysis using a model like Fair, the inputs are objective,” he says. “It’s unfortunate that there are still a large number of people who don’t understand that there are reasonable and effective solutions [to modelling cyber risk]. I’ve had numerous conversations with CISOs about Fair. They see the practical value of it.”

Still, there remains considerable disagreement over whether probabilistic risk models can be developed to accurately predict losses from cyber attacks at all. A model that gives a wide distribution of potential losses would imply the threat to a bank from a ransomware attack, for example, could be anywhere from negligible to catastrophic – not a helpful guide for improving the visibility of future losses, some argue.

“In market and credit risk, there’s a clearer unit of measure that involves money. But in the world of cyber security, if you’re trying to get at how much loss you can expect from a particular event, that’s at best a guess,” says Amin. “There are so many different threat scenarios, so many vulnerabilities, and so many types of impact, it’s close to impossible to create a one-to-one map linking cause and effect. You don’t have an established gold standard for risk measurement in cyber security.”

Subjective approaches

In the absence of a quantitative model, banks tend to take more subjective approaches – such as checklists, scenario analysis, tabletop exercises and audits – to manage cyber risk.

JP Morgan spends about $600 million a year on cyber security, says Amin, and works closely with emerging and established technology companies, in addition to employing its own in-house cyber experts to identify and remediate weaknesses that could become targets for cyber attackers. It also works collaboratively with other financial institutions through industry groups such as the Financial Systemic Analysis and Resiliency Center.

Amin says: “The essential debate within the industry is: do you have a structured approach that is based on guesses or estimates, or do you try to put some numbers around things where you actually can? There are methodologies like Fair, which have their shortcomings, which is why you don’t see a lot of adoption.”

RiskLens’s Jones counters that adopting a risk assessment framework, predefined checklists and a set of common practices is only a form of implicit risk management; managing risk explicitly, he argues, requires one or more quantitative risk-based objectives.

Op risk practitioners frequently complain that sufficient data does not exist to make reasonable projections about potential losses from cyber. Jones reports having similar concerns when he was at Nationwide in the early 2000s. That prompted him to work with the firm’s actuaries to develop a method of making projections based on incomplete data. The result was the beginnings of Fair.

“They said ‘what you want is accuracy, and if you don’t have a ton of data, that’s fine’,” he says. “Your results will have a wider distribution, but that’s still informative and better than waving a wet finger in the air.”

Jones gives the example of modelling ransomware. Typically, a bank using Fair to model its exposure in the event of future attacks would define the inputs – including a definition of ransomware as the company sees it, and the assets at risk, such as employee or customer PCs – the threat agents, and its existing controls. It would then apply quantitative methods such as Monte Carlo simulations to Fair’s projections to derive a distribution.

“From a best to worst case and everything in between, Fair says ‘here’s what our annualised loss exposure is; if we make this change in our controls, how much less risk would we have’? Then you have a basis for a cost-benefit analysis, which you can’t do unless you apply these methods,” says Jones.
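
As a rough illustration of that workflow, the sketch below runs a frequency-times-magnitude Monte Carlo in Python. The Poisson and lognormal distributions and all parameter values are placeholder assumptions, not Fair’s taxonomy or RiskLens’s calibration.

```python
# Stylised Monte Carlo sketch of a FAIR-style annualised loss estimate.
# Distributions and parameters are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_years = 50_000

# Loss event frequency: number of ransomware events per simulated year
events_per_year = rng.poisson(lam=0.4, size=n_years)

# Loss magnitude per event: lognormal with a median around $500k and a fat tail
annual_loss = np.array([
    rng.lognormal(mean=np.log(500_000), sigma=1.2, size=n).sum()
    for n in events_per_year
])

print(f"mean annual loss exposure: ${annual_loss.mean():,.0f}")
print(f"95th percentile: ${np.percentile(annual_loss, 95):,.0f}")
# Re-running with tighter controls (lower frequency or magnitude) gives the
# before/after comparison Jones describes as the basis for cost-benefit analysis.
```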

Today, Fair is administered by the Open Group, a global standards consortium with several hundred member firms. It can be applied to a number of risk categories, not just cyber risk; a bank in Washington State, for example, reports using it to analyse the risk associated with providing banking services to legal marijuana growers.

Rising confidence

The desire to further develop cyber risk models is certainly strong given the increasing frequency and severity of cyber attacks, and confidence is rising that the industry will get there – even if it takes the crystallising of several large loss events to make it happen: “We don’t use risk models, but in the next five years that will change,” says an information security executive at a major bank. He adds: “Fifteen years ago, there weren’t a lot of people using risk models in trading either.”

While most CISOs agree that adopting a risk-based approach to cyber attacks is desirable as a long-term objective, some stress the importance of a more tactical risk-based approach to information security. Where policy management and governance were once the cornerstones of a cyber security programme, the focus has now shifted to quantifying the potential loss from an attack.

“Although very mature risk-based frameworks don’t exist, there is a degree of modelling in the sense that you need to risk-rank assets for capital reserves, just as you would for other parts of the business,” says Jerry Brady, global chief information security officer at Morgan Stanley in New York. “If you’re trying to account for people with a couple of hundred million dollars in a budget, you’d better know what risks you’re managing against.”

Others argue such an approach can be ineffectual, however – even dangerous.

“When you go down the path of trying to make it too risk-oriented, you lose the ability for a broad understanding in terms of what you have to do,” says a technology risk executive at a large bank.  “The way I think about it is more about simple questions a senior executive might ask: ‘Do our people practice safe computing?’; ‘Do we develop applications securely?’; ‘Is our infrastructure safe?’ Then I develop key risk indicators that can answer those questions, and which say whether you are within your risk appetite or in excess.”

RiskLens’s Jones counters that such questions themselves imply a reliance on guesswork. In fact, his development of Fair stemmed partly from conversations at board level over how his firm was approaching cyber risk mitigation.

“For someone who decries subjective inputs, the whole notion of relying on expert judgement should be a glaring contradiction,” he argues. “As for debates over inputs with senior executives – those are good things. That’s supposed to happen in a healthy dialogue about things that are important to the executives.”

CISOs argue that another limitation of the Fair model is its lack of scalability; it may be helpful in assessing a bank or an individual business line’s exposure to a given threat over a defined time period, but not its aggregate exposure. “Fair is meant to be applied in a micro sense,” says the technology risk executive. “It’s not meant to be a top-down enterprise risk model.”

In other words, an approach such as Fair could be applied with accuracy to specific areas, such as the potential loss that could result if a security patch is not applied to a bank’s operating system in a timely fashion – but not to its overall exposure to cyber losses on an ongoing basis.

“If you have a way of saying the average exploitation window for a particular vulnerability is 30 days, and it takes X number of days to apply patches, then you’d be able to calculate the probability,” says Witty. “You could do that for certain things, but it wouldn’t be all-encompassing. There are too many variables that can dramatically shift the measurements.”
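
A back-of-envelope version of that calculation might look like the following; modelling time-to-exploitation as an exponential with a 30-day mean is an illustrative assumption, not Witty’s method.

```python
# Back-of-envelope sketch of the patching example described above. Treating
# time-to-exploitation as exponential with a 30-day mean is a simplifying
# assumption for illustration only.
import math

def prob_exploited_before_patch(mean_exploit_days: float, patch_days: float) -> float:
    """Probability an exploit lands before the patch is applied."""
    return 1 - math.exp(-patch_days / mean_exploit_days)

for patch_days in (7, 30, 60):
    print(patch_days, round(prob_exploited_before_patch(30, patch_days), 2))
# e.g. a 60-day patch cycle against a 30-day exploitation window leaves a
# roughly 86% chance of exposure before the fix lands.
```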

RiskLens’s Jones feels the limited applications of Fair are a function of the compartmentalised way banks often approach op risk management: “Fair can be used at any level of abstraction. Unfortunately, our profession tends to gravitate to the weeds, so very often the more strategic application of Fair is overlooked.”

A broader probabilistic model for estimating cyber exposure may be some way off, but defenders of risk-based approaches say the techniques can be applied to cyber risk just as successfully as they have been in other fields.

“There are opportunities for us to take security forward by building on Bayesian probability analysis,” says Brian Barnier, principal at consultancy ValueBridge Advisors. “CISOs have the opportunity to grow their influence by expanding their analysis to the broader IT environment, where sources of security risks lurk.”

Instead, CISOs tend to approach their task with “recycled audit checklists” in mind, Barnier says, creating a disconnect.

“Most controls aren’t dynamic; controls structurally can’t respond to the real world. It’s like the difference between checking tires on an airplane and dealing with an in-flight emergency.”

Op risk capital fight a limp political thriller

By Louie Woodall | Opinion | 11 July 2017

Battle to replace AMA with non-models approach was beset by nationalistic squabbles

Like a late-term politician, the revised standardised measurement approach (SMA) for op risk capital could find itself in office, but not in power, once it is finally implemented.

The debate on how to set Pillar I capital requirements began in late 2015, when the Basel Committee said the advanced measurement approach (AMA) – intended to be the industry’s gold standard when it was introduced under Basel II – had not met its expectations.

With the committee’s member states split roughly evenly between proponents and detractors of the AMA, the search for a suitable replacement was never going to be easy.

It is understood the French regulator, for instance, was a strong proponent of the AMA, and the Germans were also keen on the approach, as evidenced by their efforts to push their bigger banks onto the methodology. Experts at the Australian Prudential Regulation Authority, meanwhile, have already said they want to keep the “good parts” of the AMA, and will not change their expectations for banks’ op risk capital levels.

“We were not against AMA. We did think the best way to capture risk, which is so difficult to get consensus on, was to allow for modelling. We were consistent in not favouring a standardised method,” grouses a source at one regulator. 

Yet despite widespread opposition, the SMA was unveiled in March 2016, with the promise that it would provide a stable, comparable, and risk-sensitive capital measure.

Others perceived different motives behind the committee’s advocacy of a new standardised approach. US banks openly criticised the AMA, which they say afforded too much flexibility to national regulators, and left many with far higher capital requirements than their European peers. 

“The way the AMA guidelines have [been implemented in the US]… produces a number that is much more of a tax than a true measure of risk,” says one US bank’s head of op risk measurement.

It’s unsurprising, therefore, that some see a US influence behind the SMA’s ascendancy. A team of Federal Reserve economists were behind the SMA’s initial calibration. When an industry study found the capital increase implied by the new measurement would be many times larger for European than US banks, mutterings about a US-led fait accompli grew louder. 

“The Americans were very keen on removing AMA – they were for removing models,” says the regulatory source.

Mindful that European objections to the SMA could yet jeopardise the so-called Basel IV package of reforms, stakeholders negotiated a series of fixes – some of which were detailed in a leaked briefing paper from Basel chair Stefan Ingves dated May 19 – to dial down the implied capital increase.

Some commentators see political expediency behind the revised quantitative parameters, however: “All the recent SMA calibrations since the start of the year have been political negotiations. They started recalibrating over the past few months by negotiating coefficients that will reverse-engineer the numbers they already have,” says Evan Sekeris, a partner at Oliver Wyman. 

How the final SMA calibration will shake out is anybody’s guess; a number of insiders relate that further changes are on the way. In an interesting twist, one of the architects of the original SMA, Marco Migueis, a senior Fed economist, has issued a proposal for an alternative forward-looking and incentive-compatible approach, which proponents suggest could form the basis of a US-specific Pillar II capital overlay to the op risk capital framework.

It has yet to win many fans, however – particularly among Europeans. “Frankly, Mr Migueis and his Fed colleagues did not inspire confidence in their initial attempt to ‘improve’ op risk via the SMA, which seemed like a crude and poorly thought-out way to measure capital requirement,” says the head of op risk controls at a European bank.

Even now, there are growing expectations that national regulators will be given the flexibility to ignore or amend parts of the framework. After 18 months of politicking, the industry could now be left with a lame-duck capital methodology.

The Basel Committee declined to comment.

U-turn on SMA comparability sparks anger

By Louie Woodall | Feature | 10 July 2017

Three regulators echo bank dismay as key principle of op risk capital framework is abandoned

This article is the second in a series focusing on proposed reforms of the operational risk capital framework; the first can be found here.

Simplicity, comparability, risk sensitivity. The Basel Committee on Banking Supervision identified these three principles as the foundations of its standardised measurement approach (SMA) for operational risk capital when it was first unveiled in March 2016. Yet fast forward a year, and all three appear to have been jettisoned in the pursuit of a compromise that, while it may help end the deadlock over the Basel III reform package, could also impose a flawed calculation methodology on the industry.

At best, practitioners say, a watered-down SMA – the possible outlines of which were revealed in a leaked Basel document in May – will act as a backstop to national supervisors’ own more risk-sensitive Pillar II capital assessments. At worst, it could lead to wildly divergent capital requirements across and within jurisdictions, weaken the practice of using past op risk losses to inform required capital ratios, and leave firms unguarded against future op risk threats.

With the industry rapidly losing faith in the approach, three regulators Risk.net spoke to for this article say they are refocusing their attention on capturing institution-specific risks through the Pillar II capital framework – the add-ons supervisors use to cover perceived shortfalls in a bank’s Pillar I capital requirement.

“We have no confidence in the fact the SMA is a good measure of risk,” says a source at a European regulator. “Our preferred method would be to take the SMA calculated as a floor and then [calculate] the Pillar II methodology that will be common to all European banks, ruled by the European Central Bank (ECB) and its single supervisory mechanism (SSM),” he adds.

Practitioners aren’t thrilled by this idea, however: “The SMA is not the solution by any stretch of the imagination. There is no way this approach can realistically capture a bank’s operational risk profile. Focus will shift instead to Pillar II, but the problem there is you don’t have any guidance that enables jurisdictions across the globe to apply a Pillar II standard uniformly. That creates scope for regulatory arbitrage,” says the head of operational risk capital at a UK bank.

Many suggest those jurisdictions that were once proponents of the now-junked advanced measurement approach (AMA) – including Australia, Canada, Germany, the UK and US – are likely to urge banks to repurpose the resources devoted to op risk modelling to assist with the calculation of Pillar II add-ons.

Practitioners say the comparability and robustness of op risk capital will largely depend on how standard-setters choose to apply these Pillar II overlays. As different jurisdictions choose their own paths, the dangers of exacerbating an already unlevel playing field become greater.  

“There is a vast divergence in supervisory practice at the moment. In Europe, the ECB will be harmonising a lot of the big banks’ capital requirements. However, the UK will probably continue to do its own thing, and capital there is higher than for many of the countries on the continent. Then there’s the US – where the SMA will probably result in a capital reduction; and considering the direction of the new administration’s policies, there may also be a reduction in gold plating,” says Jouni Aaltonen, a director in the prudential regulation division at the Association for Financial Markets in Europe (Afme).

In Europe, the ground has been laid for regulators to expand their use of Pillar II add-ons. The ECB – which directly supervises 125 European Union banks through the SSM – last year introduced Pillar II guidance as a tool in its supervisory review and evaluation process. A Pillar II capital requirement, covering risks underestimated or not covered by Pillar I, already exists, but the guidance is intended to inform all banks under ECB supervision of the “adequate” level of capital that should be maintained.

“The inherent mission of the ECB SSM is to harmonise Pillar II, which varies among the different members of the eurozone. It’s absolutely needed, independently of what happens in Basel,” says the source at the European regulator.

The route US prudential regulators will take, in contrast, remains unclear. The Financial Choice Act, a legislative package aimed at rolling back elements of the Dodd-Frank Act and passed by the House of Representatives on June 8, includes a section that would prohibit federal regulators from establishing an op risk capital standard unless it is “based on the risks posed by a banking organization’s current activities and businesses” and is determined under a “forward-looking assessment of potential losses”.

Earlier this year, Marco Migueis, a senior economist at the US Federal Reserve, and one of the architects of the original SMA proposal, published an alternative op risk capital framework, dubbed the forward-looking and incentive-compatible approach (FIA), which has been floated as a possible US-specific Pillar II overlay. So far, though, it’s been given short shrift from operational risk practitioners.

The leaked document from May detailing the revised SMA’s calibration indicates that including the historical loss component will no longer be mandatory, but will instead be left to national regulators’ discretion. Some argue that, at a stroke, this undermines the very principle of comparability that was meant to be at the heart of the SMA.

“It’s an odd message to give when you say the SMA is meant to be about comparability and the first thing you do is give regulators the option to change some key parameters in the model that no longer make it comparable,” says the head of enterprise and operational risk management at a large European bank.

However, others argue that liberating historical loss calibrations from the straitjacket of the SMA could allow regulators to better tailor the inclusion of these data points to their banking systems.

Brad Carr, a director in regulatory affairs at the Institute of International Finance, says regulators could, for instance, include loss history but apply caps and floors to the amount of each event to prevent one big loss, or dozens of very small losses, overwhelming the calculation. Alternatively, the influence each loss event has on the loss component calculation could degrade over time, and slowly roll out of the calculation rather than stay in full over the 10-year period, he suggests.

A source at a second European regulator sees this kind of flexibility as a worrying development. “I think it’s right there’s growing support for the forward-looking element, but that doesn’t mean we should scrap [the backward-looking element]. It’s like if you are training to be a military general today, you wouldn’t scrap reading Sun Tzu’s The Art of War,” he says.

A gradual unwinding

How did it come to this? The first iteration of the SMA was unveiled in March 2016. Intended as a replacement for the AMA – which had aimed to set the gold standard for Pillar I op risk capital for international banks when it was first introduced in Basel II – the new regime was meant to provide a non-model-based method and sweep away the confusion of modelled approaches spawned under the AMA.

The source at the first European regulator offers his take on how the SMA unwound: “The idea was to remove AMA. Then we said ‘we have to replace the standard approach with something that is more risk-sensitive’. There was no agreement at all on what was sufficiently risk-sensitive, so the only possible way to find a compromise was to allow some jurisdictions – that thought it was not appropriate to capture risk sensitivity – to revert to an even more basic measure.”

The original SMA formula consisted of two elements: a simple financial statement proxy of op risk exposure, called the business indicator (BI), and bank-specific historical loss data – the loss component. The BI was supposed to anchor the capital output to a firm’s likely future exposure to risk, enabling regulators to compare the risk profiles of the banks under their care. BI coefficients, or multipliers, calibrated to a bank’s size – and thus its supposed susceptibility to future losses – would then be applied.

The loss component, meanwhile, was supposed to enhance the SMA’s overall risk sensitivity by adjusting the BI output to each bank’s own loss experience. Average total losses per bank were derived from 10 years of past loss data, and that amount was used to nudge the total SMA capital number up or down according to a mathematical function.
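
In stylised form, the structure described above might be sketched as follows; the bucket thresholds, coefficients and adjustment function are illustrative placeholders rather than the Basel calibration, which was still being debated.

```python
# Stylised sketch of the structure the article describes -- a business
# indicator scaled by size-bucketed coefficients, nudged by a loss component.
# The bucket thresholds, coefficients and adjustment function below are
# illustrative placeholders, not the Basel calibration.
import math

def bi_component(business_indicator: float) -> float:
    # Illustrative size buckets and marginal coefficients (not Basel's)
    buckets = [(1e9, 0.11), (30e9, 0.15), (float("inf"), 0.18)]
    capital, lower = 0.0, 0.0
    for upper, coeff in buckets:
        capital += coeff * max(0.0, min(business_indicator, upper) - lower)
        lower = upper
        if business_indicator <= upper:
            break
    return capital

def sma_capital(business_indicator: float, ten_year_losses: list[float]) -> float:
    bic = bi_component(business_indicator)
    loss_component = sum(ten_year_losses) / len(ten_year_losses)  # average annual loss
    # Illustrative adjustment: banks with losses above/below their BI-implied
    # level see capital nudged up/down, dampened by a log function
    adjustment = math.log(math.e - 1 + (loss_component / bic) ** 0.8)
    return bic * adjustment

print(round(sma_capital(20e9, [400e6] * 10) / 1e6), "million")
```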

Basel’s SMA proposal was still cooling on the presses when the criticisms began. Foremost among practitioners’ – and some regulators’ – concerns were the weighty capital increases implied by the new methodology, and the manner in which this uplift was spread unevenly between jurisdictions (see box: SMA: in numbers).

Besides the disproportionate impacts across the globe, dealers were also vexed by the approach’s reliance on historical loss data and the manner in which it was incorporated in the calculation formula. Jamie Dimon and Peter Sands are two of the more high-profile banking executives to disparage op risk capital methodologies that rely on backward-looking measures.  

The publication of the initial SMA framework in March 2016 triggered a round of finger-pointing, haggling, and backbiting among national regulators at the Basel Committee level. Those charged with supervising banks likely to incur significantly increased capital requirements under the approach were the most vociferous objectors. Some argue this is a fundamental shortcoming of standardised approaches: there are winners and losers, and the losers will always kick up a fuss.

“If you take a hammer and try and do brain surgery, with the best will in the world, it’s unlikely you are going to get a good outcome,” says the source at the second European regulator, when asked about the variability in SMA outputs.

Keen to win agreement on the SMA and get the full package of Basel III regulatory reforms signed off, however, the committee is considering aggressively stripping back the approach, the leaked document from May suggests. The headline proposals were: offering national regulators broad scope to tweak the underlying methodology at their discretion; shrinking the multipliers applied to the BI component; and introducing generous transitional provisions to ameliorate the pain of any capital increases.

This compromise has attracted fresh criticisms from those who argue the Basel Committee has swung too far towards simplicity in its revision, however, and in doing so essentially passed the problem of how to calibrate a truly risk-sensitive, comparable op risk framework back to national competent authorities. It’s also taken flak for neglecting any consideration of rapidly evolving op risk threats, such as cyber risk.

“The SMA is so heavily geared to what’s happened in the past. If you’ve done something horrendous, like a bad wart, it stays with you for 10 years. Conceptually, that doesn’t feel right. You might have done something horrific but cleaned your act up. On the other hand, you may have suffered no cyber losses, ever, but get hit by one tomorrow that is a catastrophe. You’ve got to get the right balance,” says the UK-based head of op risk at an Asian bank.

The revised SMA detailed in the May 2017 leak exchanges the five BI buckets featured in the March iteration for three and applies new, lower multipliers to each. The new calibration, in fact, is similar to those used in one of the fallback approaches for calculating op risk capital under Basel II, known as the standardised approach (TSA) (see box: The old guard).

The SMA’s BI component, for instance, mirrors the multipliers under TSA, which were set at 12%, 15% and 18%, with the higher multipliers applied to the business lines deemed the most susceptible to op risk.

That may make life simpler for supervisors or banks still following the TSA, but it may not do much for forward-looking estimates of risk, practitioners note. “What’s amazing is those TSA multipliers were guesstimates made 10 years ago, so we are using old calibrations not rooted in an analysis of a decade of op risk data,” says Evan Sekeris, a partner at Oliver Wyman.

Unsurprisingly, the lower multipliers and consolidation of BI buckets have the effect of lowering the amount of capital generated by the SMA. The Operational Riskdata eXchange Association (ORX) fed the same data used for its May 2016 survey of the initial SMA through the revised approach in June this year, applying the new BI buckets and multipliers, but assuming the same calibration of the loss component as that released last March. Compared with the previous calibration, the capital numbers tumbled.

Now, the median European bank would experience a 35% uplift versus the status quo, whereas the median US bank would actually record a drop in capital requirements of 17%. Luke Carrivick, head of analytics and research at ORX, warns these numbers should be taken with a pinch of salt, however, as it remains to be seen whether Basel will tinker with the loss component before finalising the package.

Some feel it’s as though the past 10 years had never happened. “I have advocated the replacement of BIA and TSA with SMA – which is superior on paper to the current simpler approaches – and retention of a revamped AMA, combined with a much stronger role for Pillar II to incentivise more sophisticated analytics. The watering down of SMA is a real disappointment if true and I believe even undermines the value of the change for those who currently use BIA or TSA – I suspect it is only to save face for the policymakers after spending so much time and investment in SMA that they are continuing with it. A more sensible approach now – given the watering down of SMA – would be to just retain BIA and TSA, with a new higher calibration, and save regulators and firms the huge compliance costs of implementing the new regime,” says Jimi Hinchliffe, an independent operational risk consultant.

Others argue it’s worse than that, pointing out that the TSA applied multipliers to business lines based on their assumed riskiness, whereas the revised SMA would apply the multipliers based on size alone.

“With the shift from the TSA to the SMA, the thinking has gone from ‘op risk capital is driven by your business models and what markets you are in’, to ‘your op risk profile is driven by your size – the bigger you are the proportionately more risky you are’,” says Carrivick.

Some practitioners, at least, have no intentions of slowing down their op risk modelling programs. “We are carrying on as normal, cranking the handle of our framework, looking at what our profile is, where we are most exposed, what we need to do to mitigate this stuff, how do we report it, and then we see how to calculate the number. We are not looking at the moment at whether we need to change the way we do things because the SMA is going to come – because for us the capital number is always a by-product, not a goal in itself,” says a UK-based head of operational risk at an Asian bank.

The Basel Committee declined to comment for this article.

SMA: in numbers

In May 2016, the Operational Riskdata eXchange Association (ORX) published a quantitative analysis of the first iteration of the SMA that revealed in stark terms how it would produce wildly different capital impacts across jurisdictions, despite its promise to produce stable, comparable figures.

ORX crunched data from 54 internationally active banks, and found the capital increase for the European median bank would be a staggering 63.5%; for the US median it would be 2.9%.

They weren’t the only ones to find the SMA was falling far short of its promises on comparability. The Institute of International Finance (IIF) and the International Swaps and Derivatives Association also conducted an impact study, encompassing 45 banks. While that study hasn’t been released publicly, the IIF acknowledged it found the SMA would hike op risk capital requirements substantially for several European banks versus the AMA, and also lead to some large capital spikes for a number of Asia-Pacific and Latin American banks.

The old guard

Basel II inaugurated three different measures of operational risk capital in 2007. In order of escalating complexity they were: the basic indicator approach (BIA), the standardised approach (TSA) and the advanced measurement approach (AMA).

Under the BIA, op risk capital is calculated as a percentage of gross income (GI), a simplified measure of a bank’s profit and loss.

The TSA is a second simplified means of generating Pillar I capital for banks a little more advanced than their BIA cousins. Here, dealers sort their GI into eight business line categories. So-called beta factors – or multipliers – are then applied to the income in each category, and the results summed to give the total capital charge.
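
In code, the two simpler approaches reduce to something like the sketch below; the 15% BIA alpha is the commonly quoted Basel II figure and the betas use the 12–18% range cited earlier, but treat the exact numbers and business-line mapping as illustrative.

```python
# Minimal sketch of the BIA and TSA mechanics described above. The 15% BIA
# alpha is the commonly quoted Basel II figure and the betas use the 12%/15%/18%
# range mentioned in the article; treat the exact numbers as illustrative.

def bia_capital(gross_income: float, alpha: float = 0.15) -> float:
    """Basic indicator approach: a flat percentage of gross income."""
    return alpha * gross_income

def tsa_capital(gi_by_business_line: dict[str, float]) -> float:
    """Standardised approach: beta factors applied per business line, then summed."""
    betas = {
        "retail banking": 0.12,
        "commercial banking": 0.15,
        "trading and sales": 0.18,
        # ... remaining business lines omitted for brevity
    }
    return sum(betas[line] * gi for line, gi in gi_by_business_line.items())

print(bia_capital(2_000_000_000))
print(tsa_capital({"retail banking": 1_200_000_000, "trading and sales": 800_000_000}))
```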

Finally, the ill-fated AMA allowed banks to use internal models to calculate their op risk capital requirements. Dealers require prior supervisory approval to use the approach and face rigorous, ongoing assessment of their risk management frameworks as part of the deal.

The SMA was intended to sweep away this old guard and establish a simple, yet risk-sensitive, measurement. However, in its revised form, Evan Sekeris, partner at Oliver Wyman, says it has instead taken the discipline of op risk capital back a step. “When Basel II was written, the whole purpose of the exercise was to push banks as much as possible to the AMA,” he says. “There was an understanding and agreement that TSA was a rough backstop that would help people get started, but ultimately we wanted people to move to the more risk-sensitive AMA. Now we have gone full circle – we started with the SMA and have ended up with another TSA.”

Op risk managers not sold on SMA alternative

By Steve Marlin | Feature | 29 June 2017

Proposed forward-looking approach would permit internal modelling, but penalise banks if losses exceed estimates

The Basel Committee on Banking Supervision’s proposed standardised measurement approach (SMA) to calculating operational risk capital requirements has faced a storm of criticism from practitioners, who have decried it as a “messy compromise” that is “not fit for purpose”.

Even regulators appear to be having second thoughts about the package, which would curb the use of bespoke models to estimate losses and result in higher capital charges – especially for European banks. The Basel Committee failed to sign off on the SMA at a crunch meeting in March, and the standard-setter is now debating whether to give national regulators the freedom to let banks ignore past losses – a move that may alienate US regulators, who have historically favoured stricter capital rules.

So the industry might be expected to rally behind an alternative framework – dubbed the forward-looking and incentive-compatible approach (FIA) – being floated by Marco Migueis, a senior economist at the US Federal Reserve, and one of the architects of the original SMA proposal released in early 2016.

The FIA elicits a mixed response from operational risk managers, however. Many welcome the effort to incorporate op risk modelling into the framework: “I like the idea,” says the head of operational risk at a UK bank. “It is a concept which challenges the existing ‘blunt’ methodologies.” But few are on board with the design and calibration of the framework.

Migueis presented the FIA at the OpRisk North America conference in New York on June 20. The approach would see 50% of a bank’s capital requirement calculated using a regulatory formula similar to that employed by the SMA, with the remainder based on internal loss estimates.

While banks will be allowed to determine the confidence level of risk for their models, the FIA introduces a mechanism designed to incentivise accurate estimation of future losses: if a bank’s internal projections undershoot realised losses for a given year, its capital requirements for the following year would be increased by a set multiplier.
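
A stylised reading of that mechanism is sketched below; the 50/50 split follows the description above, while the safety factor, penalty multiplier and the way the penalty is applied are this sketch’s own assumptions rather than figures from Migueis’s paper.

```python
# Stylised sketch of the FIA mechanics as described in the article: half the
# requirement from a regulatory formula, half from the bank's own loss
# projection, with a penalty multiplier the year after an underestimate.
# The safety factor, penalty multiplier and how the penalty is applied are
# illustrative assumptions, not taken from Migueis's paper.

def fia_capital(regulatory_charge: float,
                projected_loss: float,
                prior_year_underestimated: bool,
                safety_factor: float = 3.0,
                penalty_multiplier: float = 1.5) -> float:
    bank_component = projected_loss * safety_factor
    if prior_year_underestimated:
        bank_component *= penalty_multiplier  # incentive-compatibility penalty
    return 0.5 * regulatory_charge + 0.5 * bank_component

# Year 1: projections held up; year 2: last year's losses exceeded the estimate
print(fia_capital(2_000, projected_loss=300, prior_year_underestimated=False))
print(fia_capital(2_000, projected_loss=300, prior_year_underestimated=True))
```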

“Banks would be allowed to estimate a portion of their capital requirement, but they would be given an incentive to have accurate estimation,” Migueis said.

Migueis claims the FIA is an improvement on both the SMA and the existing AMA, or advanced measurement approach, which requires banks to estimate future operational risk losses to a 99.9% confidence level.

“For prudence reasons, it still makes sense for a part of the framework to be set by supervisors so there is a minimum degree of conservatism,” he said. “Another part of the capital requirement would be a bank’s projection of future losses multiplied by a safety factor. To guarantee that, we have incentive-compatibility: if banks underestimate losses, their capital would be adjusted up.”

US regulators could fall back on the FIA if they are uncomfortable with the final calibration of the SMA, Migueis suggested.

Some banks are hoping they don’t. For starters, practitioners say it would be overly punitive to hike capital requirements for banks that underestimate their losses for a single year, while others are opposed to any methodology that includes an SMA component.

“As it is presented, the proposal is not endorsable,” says the head of operational risk at a European bank with significant operations in the US. “Although the idea of penalising estimation errors is in principle a good idea, using a yearly realised estimation loss as the sole ingredient is just wrong.”

“Overall, it does not get my vote,” says the head of operational risk at a second European bank when asked about the FIA. “If the methodology is 50% SMA-like, then 50% is not acceptable – just like the SMA.”

Paying a heavy penalty

The main complaint about the FIA relates to the penalty for estimation errors, which operational risk managers say is calibrated over too short a time period and results in double-counting, where large and unexpected losses are reflected in add-ons to both the SMA and FIA components the following year. “The last thing a bank confronted with a massive and life-threatening legal loss wants would be to suffer from yet more capital requirements because it underestimated the loss at the previous reporting date,” says the head of regulatory capital at a third large European bank.

The FIA also puts banks in the difficult position of having to determine the optimal capital level for their institutions – which could be lower than the 99.9% confidence level required under the AMA and as low as 50%. “Nowhere is it defined, so we don’t know what we are benchmarking the capital against,” says Evan Sekeris, a partner in the US financial services practice of Oliver Wyman and former Fed supervisor. “Indirectly, capital is defined as the expected loss – since that is what we test against – but operational risk is fat-tailed and you could be exposed to unexpected very large ones.”

The operational risk head at the first European bank echoes those concerns. “The loss estimate, as it is presented, is a quantile of the annual loss distribution, and that might be the median,” he says. “If it’s the median, then even in the case I’m free of biases and estimation errors, 50% of the time my estimate will be higher or lower than realised losses. Hence, in the FIA I’m penalised for bad luck. A much longer time horizon and a measure that is meant to identify biases should instead be used to drive the penalisation.”

Leaving it to banks to determine the acceptable failure rate – coupled with the fact that some banks will be penalised in certain years while others are not – is also likely to result in more variability in capital levels. Some banks may err on the side of caution and pad their loss estimates to ensure they cover a wide range of outcomes, while others might be more aggressive. “The fear of underestimating may lead to holding too much capital,” says the head of operational risk at the UK bank.

The counterargument Migueis presents in his paper is that the current AMA requires a 99.9% confidence level, which is so extreme that banks have had to devise complex models and a variety of estimation approaches, some requiring advanced or non-standard mathematical techniques – resulting in projections that are of limited use for pricing risk or day-to-day risk management. The FIA allows banks to project losses using models that are as simple or complex as they choose – although removing the 99.9% confidence requirement is likely to result in simpler models that are more accessible to non-expert bankers and regulators.

Migueis also says the FIA could be modified to require banks to project two or three years of losses, instead of one year, with projected losses and the resulting capital requirements averaged out over the estimation period. The downside of this is that loss projections beyond one year are less accurate – so if losses are projected to be very high this year and very low the following year, it may not be prudent to lower this year’s capital requirements because projected losses for the next year are lower.

Floor flaws

The other main gripe industry participants have with the FIA framework is that 50% of a bank’s operational risk capital requirement will still be calculated using the SMA, or a similar formula.

“Another reason to reject the FIA proposal would be the proposed level of calibration, in which a standard method, either the SMA or the BIA [basic indicator approach], would de facto result in a floor for the capital requirement,” says the head of operational risk at the first European bank. “This is unacceptable given the high level of difference that is structural across banks and jurisdictions.” The BIA – the simplest approach to calculating op risk capital under the existing Basel rules – requires banks to hold capital equivalent to a fixed percentage of their annual gross income.

The idea of using the SMA as a floor for calculating operational risk capital also features in a proposal by another Fed economist, Filippo Curti of the Federal Reserve Bank of Richmond. Under that approach, banks could continue to use internal models to calculate Pillar 1 risk-weighted assets (RWAs) in conjunction with the SMA, using calculations derived from the latter as a floor. However, the internal models would only be required to estimate losses to a 90% or 95% confidence interval, rather than the 99.9% standard for the AMA. Regulators would then set a multiplier – based on loss data from across the industry – that banks would apply to their estimates to arrive at minimum RWAs.
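
In outline, that floored structure reduces to something like the following; the industry multiplier and the numbers are illustrative, not figures from Curti’s proposal.

```python
# Quick sketch of the floored structure attributed to Curti's proposal: an
# internal-model estimate at a lower confidence level, scaled by an
# industry-wide multiplier and floored at the SMA figure. The multiplier and
# the example numbers are illustrative.

def floored_op_risk_rwa(internal_estimate_95pct: float,
                        sma_charge: float,
                        industry_multiplier: float = 2.0) -> float:
    return max(internal_estimate_95pct * industry_multiplier, sma_charge)

print(floored_op_risk_rwa(internal_estimate_95pct=600.0, sma_charge=1_500.0))  # 1,500: the SMA floor binds
print(floored_op_risk_rwa(internal_estimate_95pct=900.0, sma_charge=1_200.0))  # 1,800: the internal model binds
```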

Any approach to calculating operational risk capital adopted by US regulators will need to be floored by an internationally agreed framework to comply with the Basel accords, according to Migueis. If the Basel Committee decides to replace the existing operational risk approaches with the SMA, it would become the floor; if not, regulators would revert to the BIA.

“If the SMA is finalised, then some form of the SMA would need to be the floor for this approach,” Migueis said at the OpRisk North America conference on June 20. “If Basel III never got finalised, then the BIA could be used as a floor.”

Both the genesis and the future of the FIA are inexorably linked to the SMA. The original SMA proposal released in 2016 required banks to calculate a loss component – an average of bank-specific operational risk losses over the previous 10 years – and plug this measurement into a regulatory-defined formula to produce the ultimate SMA charge. The loss component was intended to enhance the framework’s risk sensitivity and provide incentives for banks to improve their op risk management.

However, under a revised version of the SMA that is currently under consideration, national regulators would be allowed to exclude the loss component from the calculation and with it a link to a bank’s operational risk history. Instead, the revised SMA would simply divide firms into three different size buckets and apply a separate multiplier to each to produce the capital charge.

Migueis admits the current impasse over the SMA – coupled with the desire among US regulators to maintain higher capital levels and uncertainty over the provisions of the Financial Choice Act – prompted him to propose an alternative in the form of the FIA, which he claims will increase risk sensitivity while retaining conservatism and minimising gaming.

“The Financial Choice Act explicitly mentions operational risk and says that the operational risk capital requirements should be forward-looking and not fully reliant on past losses,” Migueis said at the OpRisk North America conference. “One could argue whether the SMA would comply with this requirement.”

The final calibration of the SMA and the question of whether it raises or lowers capital requirements for operational risk will determine how receptive US regulators and banks are to the FIA or any other alternative.

According to a study conducted by the Operational Riskdata eXchange Association (ORX) in 2016, the original SMA proposal would have resulted in a mean capital rise of 63.5% for European banks, while US banks would see their capital requirements increase by just 1.3% on average.

Migueis notes in his paper that modifying the SMA to make it close to capital neutral for European banks would likely result in meaningfully lower operational risk capital requirements for US banks – an outcome that may not be palatable for US regulators.

Away with RWA

Another alternative to the SMA, proposed by Peter Sands, the former CEO of Standard Chartered and currently an adjunct professor at Harvard, would do away entirely with the concept of operational RWAs.

He advocates replacing the operational RWA approach with a separate capital buffer that would be calculated based on three components: a scale-based factor, similar to the business indicator component of SMA; a modelled component for frequently occurring, easily modellable loss events; and a third component for infrequent, high loss events that would be determined by the bank’s prudential regulator based on its assessment of the risks facing the bank, and the quality of its operational risk management capabilities.

“I share his criticisms of the SMA and welcome the fact that he has offered a forward-looking, incentive-compatible alternative,” Sands says of Migueis’ proposal. “However, he doesn’t question the validity of operational RWA, only offering a new way of determining how much RWA to hold.”

Asked if the FIA could be used to determine the regulatory component of the operational risk buffer in his proposal, Sands says it does not go far enough to satisfy regulators’ concerns about the safety and soundness of the banking system. “One problem here is the underlying assumption that minimising the loss suffered by the bank is the objective, whereas the regulatory objective should be to minimise the externalities,” says Sands. “With operational risk events, the two can be quite different.”

Speaking at the OpRisk North America conference on June 20, Migueis admitted the chances of the FIA being adopted are up in the air. “It depends on what the final SMA looks like. It depends on whether regulators are comfortable leaving the final SMA as is, and it depends on whether industry commentators believe that the approach has merit,” he said.

Migueis made it clear he intended to persevere with the FIA, however. “Internally and externally, I’m going to keep proposing this,” he said.