New op risk taxonomy set for October debut

By Steve Marlin | News | 20 August 2019

Project is being closely watched by banks and regulators amid frustrations with legacy Basel approach

A new standardised taxonomy for operational risk developed by industry consortium ORX is set to be unveiled in October.

After spending over a year sifting through a vast dataset composed of the taxonomies of more than 60 of its members, ORX has completed a first draft of the new taxonomy, which is currently being reviewed by a member advisory group, says Steve Bishop, head of risk information and insurance at ORX.

The project is not intended to supplant the taxonomy of any one financial institution, but rather to serve as a reference point, allowing firms to benchmark themselves against their peers and see what counts as standard practice across the industry.

“We do not see this as a one-off project, more of a starting point,” Bishop says. “Our intention is to repeat the work over a number of years. This will allow us to update the reference taxonomy, continuing to use a data-driven approach, and monitor whether we see convergence in the industry – potentially reaching an agreed standard in the future. It will also serve to identify new or evolving risks.”

The project – which Mark Cooke, head of op risk at HSBC, has called “the most important structural development in operational risk management for 15 years” – is being watched keenly by banks and regulators, with the industry still largely wedded to the Basel Committee’s risk taxonomy, which dates from 2001.

Originally intended as a way to assign operational losses into buckets for the purpose of calculating op risk capital, taxonomies have evolved to serve as a common language for understanding the sources of risk and their business impacts. Most firms have used variants of the legacy Basel approach as a starting point for developing their own taxonomies, which may include more recent risks such as cyber attacks.

“New taxonomies can be useful for emerging risks, where there’s less common understanding, such as cyber and IT risks,” says a senior London-based operational risk executive. 

Over the years, every bank has developed its own taxonomy with its own idiosyncrasies, so ironing them out has been a significant task. The complexity of risk taxonomies has also grown in that time: where the original Basel taxonomy contained seven broad Level 1 risk categories, the ORX taxonomy will have some 16 Level 1 risks and is expected to include between 60 and 80 Level 2 risks, says Bishop.

Those numbers mask a significant disparity in granularity among the firms whose taxonomies ORX surveyed, however: individual taxonomies ranged from 20 to 700 line items, with a median of 69.

One area of divergence is cyber risk. Some banks treat cyber as a distinct risk, while others view it not as a risk per se but as a vector for other risks, such as external fraud.

New taxonomies can be useful for emerging risks, where there’s less common understanding, such as cyber and IT risks

Senior London-based operational risk executive

Here, banks might need regulators to step in and lay some groundwork. The US Federal Reserve is playing a lead role in defining and measuring cyber risk, with the watchdog formulating a white paper on definitions to be published by the end of this year. Although the Financial Stability Board has issued a lexicon for cyber risk, there is no consensus among financial institutions on terminology.

ORX has hired management consultant Oliver Wyman to help sift through the individual taxonomies and discern common threads.

“Most banks have a Basel taxonomy, but in parallel they have a different tailored taxonomy,” says Evan Sekeris, partner at Oliver Wyman. “We are looking through those 60 bank taxonomies and trying to find common denominators. Are there things that everybody looks at the same way? Are there risks where there are divergences?”

ORX’s project substantially superseded a taxonomy project several of its member banks – including Barclays, HSBC and JP Morgan – had been working on together.

While lauding the project’s aims, seasoned op risk executives point out that changing industry practice is hard.

“Everyone wants a common taxonomy as long as it’s theirs,” says Andrew Sheen, an operational risk consultant. “Firms experience this internally as well. They have financial crimes silos and operational risk silos, all of which have developed their own taxonomies.”

Carney: Germany and France risk Brexit derivatives cliff edge

By Christopher Jeffery, Daniel Hinge, Helen Bartholomew | News | 19 August 2019

BoE governor says it is in EU countries’ “interest” to ensure full viability of financial contracts pre-Brexit

The failure of Germany and France to amend rules related to the treatment of some over-the-counter derivatives contracts ahead of the UK’s exit from the European Union could cause unnecessary stress to the European financial system, according to Mark Carney, governor of the Bank of England. Carney calls on European lawmakers to address the matter before October 31.

The UK central bank – which has micro- and macroprudential oversight of the UK financial system, as well as resolution responsibilities for banks, insurers and financial market infrastructures – has warned for some time of the need to tackle Brexit transition risks associated with multi-trillion-dollar OTC derivatives contracts. The UK’s new prime minister, Boris Johnson, has repeatedly stated his commitment to the UK leaving the EU with or without a trade deal by October 31.

During a wide-ranging interview with Risk.net’s sister publication Central Banking, Carney says the EU authorities have already tackled one major danger, related to cleared derivatives – at least in the immediate future. “The big thing that has been resolved is for cleared derivatives contracts,” says Carney. “The European authorities have taken measures for temporary permissions for large financial market infrastructures, which is hugely important.”

EU institutions were given temporary access to London-based derivatives clearing services such as LCH, Ice Clear Europe and LME Clear until March 30, 2020.

Problems remain, however, with bilateral, non-cleared derivatives.

Lawyers say there’s no question about the legal enforceability of bilateral, uncleared financial contracts post-Brexit. But there is a problem related to the lack of recognition among some European jurisdictions – notably in the two largest eurozone economies – of so-called ‘lifecycle’ events.

Lifecycle events include amending the terms of trades as well as the ability to compress or cancel derivatives against similar positions, which are very common practices in the derivatives industry. Compression of trades is viewed as an important post-financial crisis technique for reducing operational risks related to derivatives positions.

“For bilateral, uncleared contracts, national legislation has taken care of this in some jurisdictions in Europe, but in a number of the big ones – Germany and France are the most obvious – they have not addressed it,” says Carney. “The consequence of that is there is some risk.”

A lawyer at one bank with a large derivatives operation in London says he is also concerned about the situation highlighted by Carney. He says there have been few signs of an effort to address the discrepancies in Germany and France, although Nordic countries, Benelux nations and Italy have all taken steps to do so.

“We have a difference of opinion with the European authorities about the seriousness of this risk,” says Carney, adding it is “in Europe’s interest and also ours residually” to address this risk “more clearly”.

Carney questions the wisdom of adding to operational risks in the event of a no-deal Brexit, and reiterates that the authorities in Germany and France may not have fully grasped the scale of the issue.

“We know some of the largest institutions in London, they perform tens if not hundreds of thousands of these lifecycle events in total on a weekly basis,” says Carney. “So having legal uncertainty or an inability to perform lifecycle events when lots of other things are going on and there is lots of volatility – which would happen with a no-deal Brexit in our view – is not sensible. You can debate the scale of the risks, but it is just not a sensible risk to take.”

Some European officials, notably those in France, would like to see legacy (and new) bilateral, non-cleared derivatives held between UK and EU counterparties moved from the UK to Europe via a process called novation. “That approach in France [on life events] is combined with an approach to encourage moves and make it easier to move,” says the lawyer at a large bank in London. “So I wouldn’t anticipate France providing any relief. It’s not really aligned with their priority of encouraging clients away from London.”

The authorities in Germany, meanwhile, do not appear to have engaged with market participants even to discuss the issue. “Germany has simply enabled the creation of a regime,” says the bank derivatives lawyer. “They’ve given the regulator the authority to provide relief but, in fact, that ability hasn’t been triggered.”

Asked if he expected European authorities to tackle the issue of derivatives lifecycle risks by the Brexit deadline of October 31, Carney replies: “It is very much in the interests of France and Germany to make some movement on that.”

This article originally appeared on Risk.net’s sister website, CentralBanking.com.

Uniform? Op risk capital rules go their own ways

By Steve Marlin | Features | 15 August 2019

Europe and Canada set to include historical losses in new standardised approach; Australia probably not

An effort to harmonise op risk capital rules around the world appears instead to be another case of regulation fracturing into national variants right out of the gate.

In 2017, national regulators were given leeway under Basel III to allow banks to ignore the impact of past losses when calculating capital under the new standardised approach – with the potential to dramatically affect how much capital lenders would be required to hold.

So far, Europe, Australia and Canada appear to be pulling in different directions: Europe and Canada have proposed including historical losses in capital calculations; Australia may be leaving them out.

“It defeats the purpose of the whole exercise,” says Evan Sekeris, a partner in the financial services practice at Oliver Wyman in Washington, DC. “With this new approach, which feigns convergence, all of a sudden we start having different implementations.”

Other big regulators – the Federal Reserve and the Bank of England – have not yet spoken on what direction they will take. National regulators have until 2022 to put their plans into effect.

The new standardised approach is composed of two core elements: a business indicator component (BIC), which ranks firms by revenue and applies a multiplier to determine a baseline capital requirement; and an internal loss multiplier (ILM), which scales this base number according to the bank’s losses over the previous decade.
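For readers who want the mechanics, a minimal sketch of how the two components combine is below. The bucket breakpoints (€1 billion and €30 billion), marginal coefficients (12%, 15% and 18%) and the ILM formula follow the Basel Committee’s final 2017 standard; the input figures are invented for illustration.

```python
# Minimal sketch of the revised standardised approach, assuming the bucket
# breakpoints, marginal coefficients and ILM formula of the Basel Committee's
# final 2017 standard. All input figures are illustrative.
import math

def bic(bi_eur_bn: float) -> float:
    """Business indicator component: marginal coefficients by size bucket."""
    buckets = [(1.0, 0.12), (30.0, 0.15), (float("inf"), 0.18)]
    capital, lower = 0.0, 0.0
    for upper, coeff in buckets:
        capital += coeff * max(0.0, min(bi_eur_bn, upper) - lower)
        lower = upper
    return capital  # EUR bn

def ilm(loss_component: float, bic_value: float) -> float:
    """Internal loss multiplier: equals 1 when the loss component equals BIC."""
    return math.log(math.e - 1 + (loss_component / bic_value) ** 0.8)

bi = 35.0        # business indicator, EUR bn (illustrative)
lc = 15 * 0.4    # loss component: 15 x average annual losses over 10 years
base = bic(bi)
print(f"BIC {base:.2f}bn, ILM {ilm(lc, base):.2f}, "
      f"capital {base * ilm(lc, base):.2f}bn")
```

At these illustrative inputs the loss component slightly exceeds the BIC, so the ILM nudges the capital requirement above the purely size-based baseline.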


But regulators in places with heavy loss histories are said to have demanded the option of excluding them, saying they would weigh too heavily on bank capital. To excise them, the ILM would be set at one – in other words, a wash. Capital requirements would then be based only on the size of a bank’s business.

Some banks with subsidiaries in jurisdictions with conflicting standards fear they will have to comply with the more stringent set of requirements.

“If the fiscal regulator at the top has a different pronouncement from one of the sub-regulators, inevitably you will end up with a capital calculation which is the highest outcome,” says an operational risk executive at a major European bank.

An analysis by the Basel Committee and the European Banking Authority (EBA) showed op risk capital for European banks would increase 44.7% over June 2018 levels once the new standardised approach is adopted. For those banks coming from the advanced approach, op risk capital would rise 40.1%, the analysis found.

The EBA also predicted regulators with discretion to set the ILM would use it – upending one of the key goals in setting aside the advanced measurement approach in favour of the standardised one.

“One could argue that including historical losses is counterintuitive to a forward-looking risk-based approach – but the regulators want consistency on a pan-European basis,” says the op risk executive. “Otherwise, it creates arbitrage opportunities.”

There are other ways regulators could tweak the impact of historical losses, observers point out: according to the final agreed framework, the minimum threshold for including a loss in the 10-year rolling lookback is €20,000 ($22,000). At national regulators’ discretion, this may be raised to €100,000 for mid-sized and large banks.
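As a toy illustration of that discretion – every loss figure below is invented – raising the de minimis threshold simply shrinks the loss set feeding the 10-year lookback:

```python
# Toy illustration of the loss-inclusion threshold: raising the de minimis
# cut-off from EUR 20,000 to EUR 100,000 shrinks the loss set feeding the
# 10-year lookback. All loss figures are invented.
losses_eur = [15_000, 45_000, 80_000, 250_000, 1_200_000]

for threshold in (20_000, 100_000):
    included = [loss for loss in losses_eur if loss >= threshold]
    print(f"threshold {threshold:>7,}: {len(included)} losses, "
          f"total {sum(included):,}")
# threshold  20,000: 4 losses, total 1,575,000
# threshold 100,000: 2 losses, total 1,450,000
```

Note that in this example the higher threshold drops half the loss events but barely dents the loss total, since small losses contribute little to the sum.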

One could argue that including historical losses is counterintuitive to a forward-looking risk-based approach – but the regulators want consistency on a pan-European basis. Otherwise, it creates arbitrage opportunities

Risk executive at a major European bank

Who’s doing what

Canada’s Office of the Superintendent of Financial Institutions (Osfi) has recommended including past losses in op risk capital calculations because it makes the requirements more risk-sensitive, says a spokesperson. It is still weighing whether to make that inclusion mandatory, though.

Osfi has elected to move up the effective date for its new requirements to 2021, a year ahead of the final deadline, saying the earlier date shouldn’t present any difficulty to Canadian banks because they will not need to build additional models. Osfi is also analysing the impact of the revised Basel III requirements using data submitted by banks.

In Europe, where EU legislators have yet to take up the matter, the EBA this month concurred with Canada on including historical losses, recommending against allowing national regulators to set the ILM to one. Doing so, it argued, would limit the increase in op risk-weighted assets (RWAs) to less than 20% for the largest banks – essentially giving them a walk on the losses of the last decade. In contrast, an ILM with bite would jack up RWAs by more than 50%.

The EBA also wants a binding ILM because it would encourage banks to avoid losses that would otherwise stay baked into their op risk capital for years.

Australia, meanwhile, has proposed excluding historical losses – capital would in effect be based entirely on a bank’s size. In a characteristically plain-speaking 2018 consultation paper, the Australian Prudential Regulation Authority (Apra) said including loss history would skew capital calculations, because doing so would mean including losses from businesses a bank had already exited.

That would mean “a significant misalignment between current exposure and capital”, the watchdog noted – in effect, closing the barn door after the horse has run off.

An analysis by the authority found that excluding historical losses would result in a small decrease in op risk capital under the new standardised approach for the largest banks migrating from the current advanced measurement approach, an Apra spokesperson says. Like Canada, Australia has opted for a 2021 effective date.


Still, Apra can at any moment impose capital add-ons if it thinks a bank’s op risk capital is insufficient. Last year, the authority demanded a $744 million add-on from the Commonwealth Bank of Australia; this year, it required $348 million from each of three other banks: ANZ, National Australia Bank (NAB) and Westpac. Apra underscored that these cases were based on non-financial aspects of op risk that would not be apparent in historical loss data.

In the US, the Fed has given no indication of when it will pronounce on the new standardised approach. Practitioners speculate that, when it does, it might let banks seek permission to exclude some loss events from their histories, perhaps by raising the threshold at which losses are included in a bank’s 10-year window.

“The Fed has a strong desire to coordinate internationally,” says a second op risk consultant. “What they might touch is individual loss data, because other regulators are signalling they might do that. There’s language that gives local jurisdictions the ability to allow the exclusion of specific data points.”

The Fed declined to comment.

The Prudential Regulation Authority (PRA), the rule-making arm of the Bank of England, has not given any hint as to whether it will set the ILM to one, but practitioners speculate the watchdog might do so given its emphasis on the qualitative aspects of operational risk over quantitative.

“The PRA would always want to have some sort of direct capital oversight,” says the first op risk executive. “If you move to a new standardised approach, then Pillar 2 becomes close to obsolete. The idea that the PRA is not against a multiplier of one seems reasonable.”

The Bank of England declined to comment.

Many banks have a knee-jerk reaction: ‘We don’t want the ILM, we want it set to one’. If you believe you have a handle on your losses, then it’s in your interest to keep the ILM in

Evan Sekeris, Oliver Wyman

The promise of absolution

One spanking new feature of the revised standardised approach is the use of a moving 10-year window on losses. The window will be a boon to banks laden with crisis-era losses: by the time the standardised approach is fully phased in, the window will have largely scrubbed those old losses from banks’ op risk calculations. Large banks, particularly those in the US, would benefit, since under the advanced model they have had to use loss data stretching back decades.

“If the standardised approach goes live in 2022 or ’23, then many of these losses would disappear,” says an op risk executive at a second large European bank.

The moving 10-year window would also soften the effect of an ILM of more than one.

“While in the short term, many banks might suffer from the ILM, in the long run they would benefit as the moving window expunges crisis-era data,” Sekeris says. “Many banks have a knee-jerk reaction: ‘We don’t want the ILM, we want it set to one’. If you believe you have a handle on your losses, then it’s in your interest to keep the ILM in.”
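A stylised sketch of that roll-off, with invented loss figures, shows a single crisis-era event dominating the loss component until the 10-year window moves past it:

```python
# Stylised roll-off: a large 2012 loss inflates the 10-year loss component
# until the moving window passes it. Loss figures (EUR bn) are invented;
# the loss component is 15 x the 10-year average annual loss.
losses = {2012: 3.0, **{year: 0.2 for year in range(2013, 2024)}}

def loss_component(window_end: int) -> float:
    window = range(window_end - 9, window_end + 1)
    return 15 * sum(losses.get(y, 0.0) for y in window) / 10

for end in (2021, 2022, 2023):
    print(end, round(loss_component(end), 2))
# 2021 7.2 -> the 2012 event still weighs on the loss component
# 2022 3.0 -> the window now starts in 2013; the crisis-era loss is gone
# 2023 3.0
```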

Loss history is a major component – though by no means the only one – of op risk RWAs for banks using the current advanced approach. Since each bank uses its own model, and each decides how conservative to be, the amount of RWAs could vary even among banks with very similar loss histories. Also, op risk RWAs as a percentage of the overall total will depend on the composition of a bank’s businesses: a bank with relatively low credit or market risk RWAs could have proportionately higher op risk RWAs.

Another factor is the treatment of losses at businesses a bank has exited. Some banks have placed those RWAs in ‘capital release units’, or ‘bad banks’, and may then ask regulators to disregard them for purposes of op risk capital on grounds the losses have nothing to do with the ongoing business. (See box, ‘A case for clemency?’)  

Still, some insight can be gleaned from looking at op risk RWA percentages. Out of 19 European banks, UBS had the highest percentage of op risk RWAs for 2018, at 29.4%. Rounding out the top three were Deutsche Bank with 26.3% and Credit Suisse at 24.9%.

At the bottom of the European scale, Intesa Sanpaolo had op risk RWAs of 6.4%, along with UniCredit at 8%, DZ Bank with 8%, CaixaBank at 9% and Groupe BPCE with 9.7%. The remaining banks hovered between 10% and 14%, with the exception of ABN Amro which came in at 18.1%.

The five large Canadian banks had percentages between 12% and 13%, with the exception of Scotiabank at 11.2%. Of the big Australian banks, ANZ, Westpac and NAB were all below 10%; the Commonwealth Bank of Australia was an outlier at 12.7%.

The Basel accords aimed first and foremost to fortify cash bulwarks at banks after the financial meltdown caught them woefully undercapitalised. Uniformity in capital rules was always a secondary aim. Still, some observers blame the knot of coming national rules on the Basel Committee itself: if it wanted homogeneity, why allow discretion that would predictably torpedo it?

“All the divergence and deviations from a unified capital model come from a serious lack of confidence in what the standardised approach has to offer,” says an operational risk consultant. “The only thing that flexibility does to a model is destroy the credibility of the model.”

A case for clemency?

One way banks can shed risk-weighted assets quickly is by convincing their regulators to let them disregard historical losses they consider no longer relevant when calculating capital – usually, where the losses occurred within a business the bank has subsequently exited.

Under the new standardised approach, national supervisors will retain this discretion; but as the Basel Committee notes: “The exclusion of internal loss events should be rare and supported by strong justification,” adding that when making their decisions, supervisors “will consider whether the cause of the loss event could occur in other areas of the bank’s operations”.

This can get tricky, as regulators may have their own view of things. Losses related to misconduct, for example, could be construed as part of a wider corporate-culture problem, rather than a few bad apples in one particular business line or subsidiary, and therefore still deemed worthy of inclusion in a bank’s capital number.

One bank currently looking for clemency on past losses is Deutsche Bank. The bank is moving €74 billion of RWAs – €36 billion of that representing operational risks – into a ‘capital release unit’, or ‘bad bank’.

The German lender’s hopes for shedding operational risk-weighted assets rest on whether it can convince regulators that it has truly set aside businesses that were bleeding it of cash. Deutsche, which currently uses the advanced measurement approach to model its capital requirements, plans to unload the assets over the next three years, and gradually wind down the bad bank, reducing its op RWA pile to €28 billion by 2021.

But in taking this step, Deutsche Bank is also hoping to wash the losses from its modelling slate.

“In the operational risk world, you can drop the loss history ultimately from your modelled results if you exit a business, in consultation with your regulators,” James von Moltke, the company’s chief financial officer, said on a July 7 call to analysts.

And those regulators are crucial. In order to get a capital reduction, Deutsche has to prove to them that the losses were in the discontinued businesses, and not part of its ongoing operations.

Regulators will also want to see whether Deutsche Bank has truly closed the door on a business, or merely reduced its presence.

“There has to be a full exit of that market or product,” says Andrew Stimpson, an analyst at Bank of America Merrill Lynch. “The question for Deutsche is whether it’s really a full exit or whether the exits are substantial enough to give regulators that assurance that they aren’t just becoming smaller in a product, but are actually exiting it entirely.”

Deutsche Bank will ultimately need to make its case to the European Central Bank, which is likely to be less accommodating than the bank’s home-country regulator in Germany.  

“To the extent the changes in the operational risk model are material, as defined by applicable EU regulatory standards, they have to be approved by the competent authority, ie, the ECB,” says an ECB spokesperson.

Deutsche Bank declined to comment.

Op risk data: Mifid fines hit $140m

By ORX News | Opinion | 9 August 2019

Top five: Deutsche pays €175m to settle derivatives bribery claims. Data by ORX News


The largest operational risk loss in July is from a familiar source: a fraud at an Indian bank. Punjab National Bank reported a 38.1 billion rupee ($555.6 million) loss from Bhushan Power & Steel, which is accused of manipulating its accounts to raise funds from a consortium of lenders and misappropriating bank funds.

Bhushan’s former chief financial officer and director, Nittin Johari, was arrested by Indian police in May 2019 over alleged fraudulent activities, including filing false documents with various banks, according to media sources.

Punjab National Bank said it expected a “good recovery” of the losses and had already provisioned 19.3 billion rupees in line with regulation.

In April, it was reported that Bhushan Power & Steel had fraudulently diverted 23.5 billion rupees held in loan accounts at Punjab National Bank, Oriental Bank of Commerce, IDBI Bank and UCO Bank to the accounts of various companies and shell companies. It is unclear whether or not that case is related to the 38.1 billion rupee fraud at Punjab National Bank.

The second largest publicly reported loss is a $314.6 million fine for the now-collapsed Dubai private equity firm Abraaj Group. An investigation by Dubai’s financial regulator found that two of the company’s subsidiaries managed assets without proper authorisation between 2007 and 2018, provided misleading financial information, and failed to maintain adequate capital resources, among other failings.

As early as 2009, Abraaj’s compliance function had raised concerns about the group carrying out unauthorised financial services, the regulator found. However, senior management ignored this.

In third place is another loss related to Bhushan Power & Steel. Kolkata-based Allahabad Bank reported a fraud totalling 17.8 billion rupees ($258.9 million) in Bhushan’s account.

Following an audit of the steelmaker’s borrowings at Allahabad, Indian police filed an initial report against Bhushan and its directors. Allahabad has provisioned 9 billion rupees against the loss, and, like Punjab National Bank, it said it expected a “good recovery”.

The fourth largest loss is the €175 million ($197.1 million) that Deutsche Bank agreed to pay Dutch housing co-operative Vestia to settle allegations that the bank bribed Vestia’s treasury and control manager, Marcel de Vries, to enter the company into interest rate swaps that caused it substantial losses.

Last year, de Vries was convicted of bribery in the Netherlands and sentenced to three years in prison. The bank reportedly paid €3.5 million in commissions to Dutch intermediary First in Finance Alternatives, over half of which de Vries received.

In 2012, Vestia suffered €2 billion in losses on derivatives purchased from banks, including Deutsche, as a hedge against rising interest rates. Deutsche did not admit liability.

Finally, US credit card issuer Capital One said it expected to pay up to $150 million after a hacker accessed the personal information of 106 million credit card applicants and customers and shared it online.

Capital One will offer credit monitoring and identity protection to all those affected, but as of July 29 does not believe that the compromised information has been used fraudulently. The firm expects costs of up to $150 million to be largely driven by customer notifications, credit monitoring, technology costs and legal support. Customers have so far filed three lawsuits against the firm.

 

Spotlight: FBI arrests Capital One hacker

Two breaches at a US and a Canadian firm exposed the personal data of millions in the past two months.

On July 29, Capital One disclosed that an external party had gained access to the personal information of its 106 million credit card applicants and customers by exploiting a configuration vulnerability in its infrastructure. The same day, the Federal Bureau of Investigation arrested a person believed to be responsible for the hack. Capital One said the incident could cost it up to $150 million (see above).

According to the FBI, the hacker accessed the data at various times between March 12, 2019 and July 17, 2019, exploiting a vulnerability caused by a misconfigured web application firewall. The hacker was able to decrypt encrypted data, but tokenised data, including account numbers and social security numbers, remained protected. The information was reportedly held on servers rented from Amazon Web Services.

In June, the Canadian credit union group Desjardins revealed that an employee had shared the information of 2.9 million of its members. Desjardins added there had been no spike in fraud cases as a result of the breach.

 

In Focus: Banks rack up big Mifid trade reporting fines

Though Mifid II has drawn much alarm from the banking world, it is Mifid I, which took root in 2008, that has cost banks big in fines so far. Over the life of the directive, European regulators have levied penalties of $139 million – more than half of that attributable to just two fines imposed by the UK’s Financial Conduct Authority in March of this year.

On March 18, UBS was fined £27.6 million ($33.5 million) for failing to submit complete and accurate reports for 135.8 million transactions. The FCA determined the failings were caused by 42 separate errors between 2007 and 2017, touching 7.5% of UBS’s reports.

Shortly after, on March 27, Goldman Sachs International was fined £34.3 million for failings in its transaction reporting. The FCA found Goldman had filed inaccurate or late reports from 2007 to 2017, covering products including equity instruments, cash equity products and other securities.

In total, the FCA has fined 14 firms over reporting failures under Mifid I going back to 2009. In contrast, there have been only a handful of fines over €500,000 outside the UK, perhaps reflecting London’s importance as a hub for settling trades.

Firms must also comply with the stringent European Market Infrastructure Regulation, which requires them to report over-the-counter derivatives positions to trade repositories.

A year ago, the European Securities and Markets Authority reported there were three regulatory penalties under Emir in 2017 – fines of €60,000 and €105,000 by Covip, Italy’s pension fund commission, and another of £34.5 million by the FCA in the UK. This last one, announced in October 2017, chastised Merrill Lynch International for failing to report 68.5 million exchange-traded derivative transactions.

Editing by Alex Krohn

All information included in this report and held in ORX News comes from public sources only. It does not include any information from other services run by ORX, and we have not confirmed any of the information shown with any member of ORX.

While ORX endeavours to provide accurate, complete and up-to-date information, ORX makes no representation as to the accuracy, reliability or completeness of this information.

Goldman’s op RWAs fall 8% in Q2

By Alessandro Aimone | Data | 8 August 2019

Operational risk-weighted assets (RWAs) fell 8% at Goldman Sachs in the second quarter of the year, as a series of past op risk events fell out of the internal loss data used to calculate its requirements.

Total op RWAs stood at $107.2 billion at end-June, compared with $116.7 billion the quarter prior and $113.6 billion the same quarter a year ago.

The bank’s total RWAs, calculated under the advanced approach, rose $1.9 billion in Q2 to $558.5 billion, as higher credit RWAs more than offset reductions in op and market RWAs. However, year-on-year, RWAs fell $55.9 billion, or 9%.

Op RWAs currently make up 19.2% of the bank’s RWAs.

What is it?

US banks use the advanced measurement approach (AMA) to quantify their op RWAs and associated capital charges. This approach uses the frequency and severity of past op risk losses to determine how much capital should be put aside to absorb potential future losses.

Each bank’s exposure is modelled using scenarios incorporating several different types of operational failure, as well as internal and external actual loss experience. 

Updates to the loss experience inputs can cause the resulting op RWA amounts to vary dramatically. For example, if a large regulatory fine is incurred during one quarter, it may result in higher reported op RWAs at the end of that reporting period.
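AMA implementations differ from bank to bank, but a common building block is a loss-distribution simulation of the kind sketched below – Poisson loss frequency, lognormal severity, and a 99.9th-percentile read of simulated annual losses. All parameters here are invented for illustration.

```python
# Loss-distribution sketch in the spirit of AMA modelling: simulate annual
# op risk losses as Poisson frequency x lognormal severity, then read off
# a 99.9th-percentile capital figure. All parameters are invented.
import numpy as np

rng = np.random.default_rng(7)

def simulate_year(freq: float = 25, mu: float = 11.0, sigma: float = 2.0) -> float:
    n_events = rng.poisson(freq)                     # loss events this year
    return rng.lognormal(mu, sigma, n_events).sum()  # total annual loss

annual_losses = np.array([simulate_year() for _ in range(50_000)])
capital = np.quantile(annual_losses, 0.999)          # 1-in-1,000-year loss
print(f"99.9% annual loss quantile: {capital / 1e6:.0f}m")
```

A large fine entering the loss data fattens the simulated tail, which is why a single bad quarter can move reported op RWAs so sharply.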

Why it matters

Goldman noted that lower operational RWAs reflected “the removal of certain events incorporated within the firm’s risk-based model based on the passage of time”. These likely refer to regulatory fines and other sanctions imposed at the time of the financial crisis, which are now falling out of the backward-looking dataset used as an input to the bank’s AMA model. 

This doesn’t mean, however, that Goldman’s op RWAs will continue to roll down quarter on quarter. Potential op risk losses lurk on the horizon, such as a looming criminal case brought by Malaysian prosecutors in relation to the bank’s involvement in the 1Malaysia Development Berhad scandal.

This could result in fines and fees that would work their way into Goldman’s loss history and counteract the removal of financial crisis-era events.

Get in touch

Sign up to the Risk Quantum daily newsletter to receive the latest data insights.

Share your thoughts with us. You can drop us a line at alessandro.aimone@risk.net, send a tweet to @aimoneale, or get in touch on LinkedIn

Keep up with the Risk Quantum team by checking @RiskQuantum for the latest updates.

Tell me more

Goldman Sachs builds legal reserves

Goldman, Wells cut operational risk

Has op risk capital peaked for US banks?

View all bank stories

ABN Amro cuts op RWAs by €1bn

By Alessandro Aimone | Data | 7 August 2019

Dutch lender ABN Amro reduced its operational risk-weighted assets (RWAs) by 5% in Q2 following a model update. 

Total op RWAs stood at €18.8 billion ($21 billion) at end-June, down from €19.8 billion the prior quarter. They are now at their lowest level since Q4 2016.

The quarterly reduction contributed to a lower overall RWA total for the bank in Q2, of €106.6 billion, down 1.3% on Q1 but up 2% on the year-ago quarter. Its ratio of Common Equity Tier 1 capital to RWAs stood at 18%, flat on Q1 and down from 18.3% the same quarter a year ago.

Op RWAs make up 17.7% of the bank’s RWAs.  

What is it?

Basel II rules laid out three methods by which banks can calculate their capital requirements for operational risk: the basic indicator approach (BIA); the standardised approach; and the advanced measurement approach (AMA). 

The first two use bank data inputs and regulator-set formulae to generate the required capital, while the AMA allows banks to use their own models to produce the outputs.
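As a concrete example of the simplest method, the BIA charge is a flat 15% (the Basel ‘alpha’) of average annual gross income over the previous three years, counting only the years in which income was positive – a minimal sketch with invented figures:

```python
# Minimal sketch of the basic indicator approach (BIA): capital is alpha
# (15%) times average annual gross income over the past three years,
# counting only years with positive income. Figures are invented.
ALPHA = 0.15
gross_income_eur_bn = [4.2, -0.8, 5.1]   # last three years

positive_years = [gi for gi in gross_income_eur_bn if gi > 0]
k_bia = ALPHA * sum(positive_years) / len(positive_years)
print(f"BIA capital charge: {k_bia:.2f}bn")   # 0.70bn
```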

ABN Amro uses a mix of the BIA and AMA to generate its op RWA amounts.

Why it matters

ABN Amro cited model updates as the reason its op RWAs slid in Q2. The majority – 96%, in fact – of its op RWAs are calculated using the AMA, so a change to this model would be expected to have a big effect on its total.

Tweaks to model parameters and updates in external and internal loss data influence AMA outputs, but they cut both ways. ABN warned in its quarterly report that its watchdog, De Nederlandsche Bank, has ordered it to review all its retail clients for possible money-laundering activity, and that fines could be imposed if the supervisor finds its customer due diligence processes fall short of expectations.

These fines would count as op risk losses, and be factored into future projections of the bank's op RWA amounts, likely forcing them higher. Perhaps this quarter’s reduction will prove to be only a temporary respite. 

Get in touch

Sign up to the Risk Quantum daily newsletter to receive the latest data insights.

What will be ABN Amro’s next move? You can drop us a line at alessandro.aimone@risk.net, send a tweet to @aimoneale, or get in touch on LinkedIn to share your views 

Keep up with the Risk Quantum team by checking @RiskQuantum for the latest updates.

Tell me more

Banks divided on op risk approaches

ING trims op risk charge by 11% in 2018

Barclays seeks op risk capital relief

View all bank stories

GRC best practice in a world under constant cyber threat

By Commercial Editorial | Advertisement | 7 August 2019

In one guise or another, cyber risk has topped Risk.net’s annual ranking of the top 10 operational risks to financial institutions each year since 2013. IBM explores how intelligent applications can allow firms to better manage their workloads, minimising cyber risk and the potential for data breaches

News of high-profile breaches and hacking attacks appears with depressing regularity. In May, Bank of America suffered a $375 million loss as the result of a malware attack targeting a number of US banks, joining a long line of financial firms to fall victim to cyber crime in recent months.

As the scale and sophistication of cyber threats grow, global regulators are increasingly concerned about the wider risk to the financial system, and firms’ ability to take a proactive and pre-emptive approach to ensuring business continuity in times of trouble. ‘Operational resilience’ is the newest buzz phrase in town.

In its June 2018 Financial Stability Report, the Bank of England revealed plans to set impact tolerances for disruption from cyber events. And, in the US, the Office of the Comptroller of the Currency identified cyber security and operational resilience as key areas of focus for supervision in 2019.

For risk and compliance professionals, the discussion is turning to how to integrate cyber concerns effectively within a resilient governance, risk and compliance (GRC) framework and align people, processes and systems around a common risk language.  

“Cyber risk has traditionally been owned by IT and rarely integrated within risk management processes,” said Judith Pinto, managing director at Promontory Financial Group, introducing the topic at a specially convened IBM roundtable at the OpRisk North America conference in New York in June. 

Pinto noted that the regulatory spotlight on this area and the change of emphasis towards greater executive accountability means boards are demanding deeper insight into cyber and resilience measures within their organisations, consistent with other operational risk data. 

That’s easier said than done, as business silos continue to hamper efforts towards an integrated risk framework. 

“You have folks with different languages – they have different terms, different processes for issue management and risk management. They’re not evaluating risk in the same way,” Pinto said. 

One roundtable participant from a boutique US commercial bank agreed this was a particular hurdle: “IT have their own risk assessments and policies, compliance have their own and then you have all these streams within risk management. It’s not just the bank that’s siloed – sometimes the risk department is siloed too,” she said.

 

Same problems, new solutions

As compliance teams search for new solutions to a familiar problem, many are turning to specialist GRC systems and technology to engage end-users and encourage closer alignment. 

Patrick Batson, senior GRC solution consultant at IBM Watson, explains that such tools are intended to help GRC leaders impose a common risk framework, monitor activity on an enterprise-wide basis, and save the costs of multiple implementations.  

“Having a single platform enables a consistent view,” he said. “That’s what we all want, but to get there a lot of things need to happen. Convergence needs to happen on a common risk framework – even agreeing what risk is. Then there are a lot of questions about how you approach the governance.”

Most roundtable attendees had introduced at least one GRC tool in their organisations, either developed in-house or from a third-party provider. One executive from a top-tier US investment bank was particularly positive about the impact of such tools, while pointing out that user engagement could not be taken for granted. “We want to provide a common risk picture, but you have to find something the business teams are going to care about,” he explained. 

“The GRC tools have given us the ability and rigour so if anyone wants to track vulnerabilities in the system, they have to adhere to certain standards and communicate in a single way. That’s very beneficial. If we don’t have it in the GRC tool it won’t get done.”

For others, slow implementation has dampened initial enthusiasm. The operational risk lead for the investment arm of a large insurer attending the event describes her experience as “a bit of a crawl”. 

“Expectation started really high but [two years into the GRC tool’s existence] we’re at the level where it’s an action-tracker,” she said.

A majority of attendees also confessed to using two or more GRC tools, indicating that the transition to an integrated solution and single point of truth is likely to remain a longer-term goal.

 

The big data challenge

Problems with poor-quality, unstructured or incompatible data can cause particular delays and frustration in the drive for consistent analysis and reporting.  

“There’s a big challenge in users using GRC tools,” said the Comprehensive Capital Analysis and Review (CCAR) lead for a multinational investment bank. “The data being developed for CCAR is not always fully coherent and we have to use different technologies to aggregate and manipulate the data we get from the GRC tool.” 

New rules governing data privacy – such as the European Union’s General Data Protection Regulation or the California Consumer Privacy Act – add another layer of complexity to an already difficult task.  

Kregg White, risk and compliance solutions specialist at IBM, recognises the data challenge as familiar territory, commenting that data normalisation programmes are an essential and increasingly regular starting point for client engagements in this area.

“One of the most frustrating things is a lack of clarity on business requirements and the lack of self-examination about the processes you might want to conduct within GRC before you adopt the tech.” 

White cautions that technology should not be seen as some sort of silver bullet and that successful implementation relies on a combination of factors. “It’s people, processes and technology, right? Everybody wants to skip the first two and get to the tech, but you can’t do that. Adoption is extremely important but preparation is the key to success,” he said.  

The same principles apply when considering how best to integrate cyber and business continuity measures and reporting into these frameworks, Promontory’s Pinto added. “Have you identified the critical business functions? Do you have sufficient people and can they be in different locations? If the process itself hasn’t been designed to be resilient, it doesn’t matter if the tech is resilient because there are too many events where people are impacted but the tech can be up and running.”

 

A clean, agile data flow

Sharing several tales of aborted projects and expensive implementation mistakes, participants noted the frequent disconnect between developers and end-users – which, according to IBM’s Batson, can only be solved with full engagement and time commitment from key stakeholders. 

While patience is a virtue when it comes to implementation, the move by many firms to agile development methodologies is having a positive impact in helping end-users track progress and see results quicker.

“I don’t think we realised how much time we needed to spend with the technologists,” confessed the global conduct risk head for a European investment bank, discussing the development of the firm’s GRC tool. “After we got that flow [from agile] we really did build something that has clean data and a single taxonomy. The metrics we now have are the metrics used across the bank. It’s accurate and attached to the live risk register and we’re able to report on it.” 

Given the increasing volumes of data needed to fulfil the growing roster of GRC responsibilities and reporting requirements, the adoption of cognitive and artificial intelligence (AI) technologies also signals an opportunity to drive efficiency in the future.

While recognising the benefits of next-generation technologies such as IBM Watson to reduce the burden of manual tasks and improve data quality, cost remains an issue. One executive described her organisation as doing a “manual Watson at the moment”, tagging data such as conduct breaches in line with an established taxonomy. 

“It’s a cost-benefit analysis we have to do to decide where [AI and cognitive] makes sense,” added another. “For some parts of the organisation it absolutely makes sense, for others it’s a bit harder.”

As GRC professionals adapt to the new regulatory demands, it is likely more firms will come to rely on intelligent applications to help manage their workload, minimise risk and support the resilient frameworks needed in tomorrow’s environment.

 

Accelerate insights and increase efficiency through applied innovation and domain expertise to make timely, risk-aware decisions at IBM RegTech.

Barclays seeks op risk capital relief

By Louie Woodall | Data | 6 August 2019

Barclays is petitioning the Prudential Regulation Authority (PRA) to lower its operational risk-weighted assets, with executives claiming the bank’s current treatment is out of sync with that of its UK peers.

In Q2 2018, the lender chose to move off the advanced measurement approach (AMA) for calculating op RWAs allowed under current Basel rules and use the standardised approach (SA) instead. But it disclosed it had “conservatively elected” to keep its op RWA amount unchanged following the switch, at £56.7 billion ($68.9 billion). It has not budged since.

The bank’s finance chief, Tushar Morzaria, said on an August 1 analyst call that Barclays is in discussions with the PRA on “removing the floor that was introduced in our operational RWAs”. He did not specify what this floor was. Barclays’ chief executive Jes Staley said on the same call that, had its op RWAs been accounted for in line with those of its peers, its common equity Tier 1 (CET1) capital ratio would have been 60 basis points higher than the 13.4% disclosed for end-June.

This implies that Barclays believes its actual op RWA level should be around £44.1 billion, 22% lower than present.
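That figure can be roughly reproduced from the disclosed numbers. The back-of-envelope sketch below holds CET1 capital constant and loads the entire RWA reduction onto op risk; it lands close to the £44.1 billion implied above, with the residual gap down to rounding of the ‘roughly 14%’ target ratio.

```python
# Back-of-envelope check of the implied op RWA figure, holding CET1 capital
# constant and putting the entire RWA reduction on op risk. Inputs are the
# disclosed numbers: op RWAs GBP 56.7bn (18% of total), CET1 ratio 13.4%,
# and the claimed 60bp uplift.
op_rwa = 56.7                        # GBP bn
total_rwa = op_rwa / 0.18            # ~315bn
cet1_capital = 0.134 * total_rwa     # ~42.2bn, held constant
target_total = cet1_capital / 0.140  # total RWAs consistent with 14.0%
implied_op_rwa = op_rwa - (total_rwa - target_total)
print(f"implied op RWAs: {implied_op_rwa:.1f}bn")   # ~43.2bn vs 44.1bn above
```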

Barclays declined to comment further. The PRA did not respond to a request for comment by press time. 

Op RWAs make up 18% of Barclays’ total RWAs, compared with 11.2% on average across the four other large UK banks – HSBC, Lloyds, RBS and Standard Chartered. 

Who said what

“Our CET1 ratio increased by 40 basis points to 13.4%, demonstrating the strong capital generation achievable by the bank. In point of fact, if our operational risk-weighted assets were accounted for more like our UK peers, then our CET1 ratio would have actually stood at roughly 14% today” – Jes Staley, chief executive officer at Barclays.

What is it?

Basel II rules lay out three methods by which banks can calculate their capital requirements for operational risk: the basic indicator approach; the SA; and the AMA. The first two use bank data inputs and regulator-set formulae to generate the required capital, while the AMA allows banks to use their own models to produce the outputs.  

Why it matters

The PRA does not impose a floor in the SA when setting Pillar 1 minimum capital requirements. Therefore, the “floor” Barclays refers to is likely the last op RWA amount calculated using its now-defunct AMA model.

The European Union’s Capital Requirements Regulation states that an institution may only revert to a less sophisticated approach for op risk if it has shown its supervisor that it is not doing so to reduce op risk capital requirements, that it would not harm its overall solvency and is necessary because of its “nature and complexity”. A petitioning bank also needs its supervisor’s express permission to make a switch. 

Barclays clearly thinks its op risk has receded since it made the transition to the SA, but is unable to unilaterally post an RWA reduction because of the PRA. Perhaps as a condition of junking the AMA, the watchdog forced the bank to maintain its op RWA level for a certain amount of time, or until the bank could evidence that a reduction was warranted. 

Now Barclays is trying to negotiate down its op RWA level. If successful, it should be able to cut its Pillar 1 capital requirement. However, executives admit that its Pillar 2A add-on would likely increase in response. Whether this would cancel out any Pillar 1 reduction is unclear.

Get in touch

Sign up to the Risk Quantum daily newsletter to receive the latest data insights.

Op risk capital amounts often appear to result from haggling between firms and their regulators. If you have an idea of what it’s like around the negotiating table, feel free to share your insights by emailing louie.woodall@infopro-digital, or sending a tweet to @LouieWoodall or @RiskQuantum. You can also get in touch via LinkedIn.

Tell me more

Op risk past is prologue for UK banks

European banks junk op risk modelling

PRA's Pillar 2 add-ons reflect mixed verdict on UK banks

View all bank stories

Basel III op risk method a stronger guard against losses – EBA

By Louie Woodall, Abdool Fawzee Bhollah | Data | 5 August 2019

The fully-loaded version of the Basel III operational risk framework would have done a better job of covering actual losses incurred by European banks than the current regime, a regulatory study shows.

Using data from 146 banks, the European Banking Authority found that net losses overshot present op risk required capital on 10 occasions between 2015 and 2017. Had the incoming Basel III rules been in force, there would have been just three such overshoots.

The EBA also found that a watered-down version of the Basel III framework, one that does not factor in banks’ historical losses, would have led to six overshoots. 

Loss amounts in excess of required capital would have been much smaller had the fully-loaded op risk framework been in force, too. Under this regime, the peak loss-to-capital ratio in 2015–17 would have been 180%, compared with 297% under the lightweight version and 412% under the current rules.

Just 0.5% of banks would have had their entire op risk capital exhausted by actual losses had this been set using the fully-loaded Basel III method, compared with 2% under the modified and current methods.

What is it?

The EBA report looks at the effect that Basel III reforms to the operational risk framework would have on regulatory capital and contains policy recommendations to the European Commission on how these should be implemented.

Basel III will replace the existing three methods of calculating op risk capital – the basic indicator approach, the standardised approach and the advanced measurement approach – with a single revised standardised approach (SA).

The SA will use total income to divide banks into three size buckets. A separate business indicator (BI) multiplier is applied to each bucket to produce the business indicator component (BIC). An internal loss multiplier (ILM) is then applied to the BIC to factor in a bank’s historical op risk losses. If the average annual losses of the previous decade, multiplied by 15, equal the BIC, then the ILM is set equal to one. If this loss number is greater, then the ILM is greater than one, and if lower, less than one. 
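In symbols – as set out in the Basel Committee’s final standard – that description corresponds to:

```latex
\mathrm{LC} = 15 \times \bar{L}_{10\mathrm{y}}, \qquad
\mathrm{ILM} = \ln\!\left( e - 1 + \left( \frac{\mathrm{LC}}{\mathrm{BIC}} \right)^{0.8} \right), \qquad
K_{\mathrm{op}} = \mathrm{BIC} \times \mathrm{ILM}
```

where the loss average is taken over the previous decade. When LC equals the BIC, the argument of the logarithm is e, so the ILM is exactly one; larger or smaller loss components push the multiplier above or below one, as described.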

However, Basel gives national regulators discretion to exclude banks’ loss histories from the calculation methodology, fixing the ILM at one.

Why it matters

The EBA has recommended to the European Commission that it scrap the discretion to filter out loss histories from banks’ op risk calculations permitted under the Basel-agreed standards. 

Looking at the data collated from the quantitative impact study, it’s easy to see why. With the ILM set to one, loss coverage would only be marginally better than under the current framework. Why would European authorities allow national supervisors to permit their banks to hold less capital than may be needed in an op risk crisis?

This means op risk capital charges are likely to increase by a hefty amount under Basel III. In a previous study, the EBA estimated that op risk required capital would increase 45% in aggregate across European banks. 

But the watchdog also recommended a phase-in of the new standardised approach, especially for smaller banks, to smooth the transition to the higher charge. This should prevent a cliff-edge effect that could have a bunch of banks rushing to issue additional capital all at the same time to cover their expected shortfalls.

Get in touch

Sign up to the Risk Quantum daily newsletter to receive the latest data insights.

Do the EBA’s findings prove that the updated framework is better tailored to banks’ op risk profiles? Share your take by emailing louie.woodall@infopro-digital, or sending a tweet to @LouieWoodall or @RiskQuantum. You can also get in touch via LinkedIn.

Tell me more

CVA exemption in Basel III could save EU banks more than €18bn

Op risk capital to jump 45% for European banks under Basel III

Basel III: final op risk framework leaves banks guessing

Big banks to bear brunt of Basel III reforms in EU

View all regulator stories

Banks queasy over idea of building cyber trust

By Steve Marlin | Features | 2 August 2019

They agree that sharing intel on cyber threats is a good thing. But that doesn’t mean they want to

Pooling intelligence on cyber trespassing is the future of digital defence – at least, that’s what many in the op risk camp are saying. Platforms to disseminate that information are being set up by consortia of banks, sometimes even with government involvement.

Yet despite their avowals, it’s not clear how much banks want to share. Beyond the most rudimentary details of a breach attempt, banks may hold back for fear of ending up in the crosshairs of regulators – or of actually helping cyber thieves.

“If you are part of a closed group and nothing leaks out, that would be hugely beneficial,” says Andrew Sheen, a consultant and former operational risk executive at Credit Suisse. But if the information does get out, “cyber criminals just move on to someone else”.

With cyber dominating banks’ top operational risk concerns, a handful of large-scale efforts are under way to combine what is known. The range of threats is considerable – distributed denial-of-service attacks, viruses, malware, phishing – and online attacks can do spectacular damage in a flash, in contrast to slow-burning risks such as fraud.

In one form or another, cyber intel sharing has been going on for years – but usually only on an informal basis. An executive at a bank under attack, or simply in the know, might reach out to a handful of trusted colleagues at other institutions or to another affected bank and quietly give them the particulars.

“We saw someone suspicious wiring a fraudulent account,” recalls a cyber risk executive at a major US bank. “Instead of telling the community ‘look out for the account’, what you’ll [do is] call … the next bank in the chain and say: ‘We’ve seen this activity – you may want to see whether there’s issues with that account.’ It ends up being ad hoc and peer-to-peer.”

Banks could use broad industry information to connect the dots. Sheen tells of a bank where a threat was detected at one of its subsidiaries. By the time the breach was sealed, the attackers had moved on to a second subsidiary. But it was only when the attackers showed up at a third that the bank finally saw the pattern.

If you are part of a closed group and nothing leaks out, that would be hugely beneficial. [But if the information does get out,] cyber criminals just move on to someone else

Andrew Sheen, consultant

“Anything that stops that from happening, so you don’t just learn from your experience, has to be hugely beneficial,” says Sheen. “There has to be a better way of learning than what’s happening to you.”

What banks need is “timeliness of information, a real ‘so what?’ use test,” says Sam Lee, head of operational risk for Europe, the Middle East and Africa, at the Sumitomo Mitsui Banking Corporation. Also needed is “a realisation that risk management is about a real-time ‘how do we stay out of trouble?’, rather than a factory-like set of processes”.

The difference can be seen in banks’ recent shuttling of cyber risk experts from IT departments into frontline risk management roles. Banks have been recruiting high-profile chief information security officers, many of them ex-government employees, such as Andy Ozment at Goldman Sachs and HSBC’s Buck Rogers.

“Most intelligence sharing to date has been human to human. It’s very unstructured,” says Samir Aksekar, a cyber risk consultant and a former cyber security executive at JP Morgan in Singapore. “Companies are realising that they need to share information in a more organised fashion.”

But will they let their guards down enough to do that?

Networks in progress

The co-ordinated denial-of-service attack that hit the US financial sector in 2012 jolted both regulators and banks into recognising the critical need to share information. Several industry efforts to pool intelligence on cyber crime are now under way.

Bank consortium ORX is building a cyber risk network to share news of attacks and best practices. With some 30 banks participating, the project has working groups on how companies define and categorise cyber risk, how to manage an attack, and how to know you’re under attack (in industry-speak, ‘key controls and risk indicators’).

In one form or another, cyber intel sharing has been going on for years – but usually only on an informal basis

Definitions are seen as central to greater intel circulation: without common names for threats and patches, banks might be more confused than helped by shared information. For instance, one bank might characterise a threat as external fraud, while another might call it a distributed denial of service. One working group of about a dozen banks is looking to specify what information is to be shared, such as how often strikes occur and what damage they cause, and will develop data standards to capture that information. The group hopes to be sharing information from early next year.
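
The working group’s data standard is still taking shape, but a minimal sketch of what a standardised incident record might look like – the field names and category labels below are illustrative assumptions, not the group’s actual schema – could run along these lines:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Illustrative only: these categories are assumptions, not the
# working group's agreed taxonomy labels.
THREAT_CATEGORIES = {"external_fraud", "ddos", "malware", "phishing"}

@dataclass
class IncidentRecord:
    category: str                # agreed taxonomy label for the threat
    first_seen: str              # ISO 8601 timestamp of first detection
    occurrences: int             # how often the strikes occurred
    estimated_damage_usd: float  # what damage they caused
    description: str = ""        # free-text detail, optionally withheld

    def validate(self) -> None:
        # A shared standard only works if every member uses the same labels
        if self.category not in THREAT_CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

record = IncidentRecord(
    category="phishing",
    first_seen=datetime.now(timezone.utc).isoformat(),
    occurrences=14,
    estimated_damage_usd=0.0,
    description="Credential-harvesting emails spoofing the payments team",
)
record.validate()
print(json.dumps(asdict(record), indent=2))  # the payload a member would submit
```

The point of agreeing such a schema up front is precisely the one the working group raises: if one member logs an event as external fraud and another as a denial of service, pooled statistics on frequency and damage become meaningless.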

Other platforms have the explicit backing or direct involvement of regulators. The Financial Sector Cyber Collaboration Centre, a group founded last year by the UK Finance trade association in collaboration with UK regulators, is developing an information-sharing network focused on attacks, vulnerabilities and strategies. The Bank of England in June 2019 hailed its creation as underscoring the importance of teamwork in the face of evolving cyber threats.

The Financial Services Information Sharing and Analysis Center, meanwhile – a global industry consortium headquartered in the US – was set up following a presidential directive in 1998, with hubs in both London and Singapore. More recently, FS-Isac set up FSARC (Financial Systemic Analysis & Resilience Center), a consortium of financial services firms that focuses on resilience under cyber siege.

In Singapore, the FS-Isac Asia Pacific Regional Analysis Centre serves 80 financial institutions headquartered in 16 countries.

FS-Isac’s Singapore outlet, however, was set up in collaboration with the Monetary Authority of Singapore. Though FS-Isac insists it does not allow MAS’s supervisory unit or any other regulator’s supervisory arm to be present on its intelligence-sharing platform, to some banks the mere involvement of watchdogs in such initiatives sets alarm bells ringing.

The trust fall

In general, banks have been wary of divulging that a cyber thief has scaled their firewalls – particularly to regulators, which could take them to task for the unprotected spots in their shields.

Sheen, who spent almost a decade in prudential risk policy with the UK regulator before joining Credit Suisse, argues vociferously that regulators should not be present on threat intel-sharing platforms.

“I feel that information sharing is best done as a non-regulated, private initiative. That’s partly because my experience as a regulator is that firms really do get very concerned about sharing information,” said Sheen, speaking at an industry event on operational resilience and cyber risk on July 24. “I can see a situation where firms might be very concerned about sharing quite specific information with their regulators that might roll on to a capital add-on later. So the involvement of regulators there, I’m not convinced it’s useful or helpful.”

One op risk executive at an international bank went further.

“The negative is, whenever you point out weaknesses, you’re disclosing vulnerabilities. The regulatory environment doesn’t allow you to address things upfront,” he says. “If regulators would allow you to address things without retaliation, then companies would be more open to sharing.”

In 2018, the Hong Kong Monetary Authority also launched its Cyber Intelligence Sharing Platform. But a year on, deputy chief executive Howard Lee admitted that “utilisation is yet to pick up”.

In the meantime, FS-Isac is trying to nurture trust.

“Members can engage with smaller circles of trust created for specific segments within the financial services sector,” says Steven Silberstein, the group’s chief executive officer. “FS-Isac also provides opportunities for members to meet face-to-face in closed door meetings, allowing them to share in a trusted and collaborative environment.”

The FS-Isac service is the primary mechanism within the US for information sharing. To build trust, FS-Isac members must adhere to a traffic-light protocol – a mechanism limiting how and with whom information can be shared. In addition, when members submit information, they are given the option to do so anonymously: FS-Isac will only attribute information if a bank first gives consent.

The information shared could include an ‘indicator of compromise’ – a sign of an active intrusion – how the attackers are getting in or disrupting the bank’s business, and what the bank did to stop them: what worked, and what didn’t.
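
As a rough sketch of how a traffic-light marking, anonymous submission and an indicator of compromise fit together – the level names follow the widely used TLP convention, but the sharing rules and fields below are simplified assumptions, not FS-Isac’s actual implementation:

```python
from enum import Enum
from typing import Optional

class TLP(Enum):
    # Standard traffic-light protocol levels (as used circa 2019)
    RED = "original recipients only"
    AMBER = "recipients' own organisations, need-to-know"
    GREEN = "the wider community, but no public channels"
    WHITE = "no restriction on disclosure"

class Indicator:
    """An indicator of compromise with a TLP marking and optional attribution."""

    def __init__(self, value: str, tlp: TLP, source: Optional[str] = None):
        self.value = value    # e.g. an IP address or a malware hash
        self.tlp = tlp
        self.source = source  # None means the member submitted anonymously

    def attributed_to(self) -> str:
        # Only attribute if the submitting bank gave consent
        return self.source if self.source is not None else "anonymous member"

    def may_share_with(self, audience: str) -> bool:
        # Simplified assumption of how each level limits redistribution
        allowed = {
            TLP.RED: {"original_recipients"},
            TLP.AMBER: {"original_recipients", "own_organisation"},
            TLP.GREEN: {"original_recipients", "own_organisation", "community"},
            TLP.WHITE: {"original_recipients", "own_organisation",
                        "community", "public"},
        }
        return audience in allowed[self.tlp]

ioc = Indicator("203.0.113.7", TLP.AMBER)  # anonymous submission
print(ioc.attributed_to())                 # -> anonymous member
print(ioc.may_share_with("community"))     # -> False under TLP:AMBER
```

The design choice worth noting is that the marking travels with the indicator itself, so the restriction survives as the information is passed from member to member.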

Trust seems to be building, says Brian Hansen, head of Asia-Pacific for FS-Isac.

“The centre has seen a shift in financial institutions’ approach to threat intelligence information sharing. They are increasingly coming together to share,” says Hansen, who previously worked at the US Defense Intelligence Agency.

The US Federal Reserve is also aware of the sensitivities.

Unlike other types of risk, cyber risk moves quickly, and can reach any of the financial institutions that we supervise, so being able to share information is critical

Jason Tarnowski, Federal Reserve Bank of Cleveland

Jason Tarnowski, vice-president of risk supervision, surveillance and analytics at the Federal Reserve Bank of Cleveland, also leads the cyber intelligence and incident management team for supervision across the entire Fed system. His group monitors and assesses cyber threats for banks under the Fed’s purview. As needed, the team shares findings with the other federal banking agencies, as well as with the private sector.

“We do maintain confidentiality from a supervision standpoint, but we do share information, so we can ensure resiliency within the financial sector and maintain public confidence,” says Tarnowski.

Knowing what threats are abroad is one of the best defences: “Unlike other types of risk, cyber risk moves quickly, and can reach any of the financial institutions that we supervise, so being able to share information is critical.”

The legal sequel

Yet another issue is what can legally be disclosed. The European Union’s General Data Protection Regulation, the US Patriot Act and other laws could exact steep penalties for disclosing identifiable personal information without consent.

But FS-Isac says that GDPR does not necessarily impede the sharing of information, and that the knowledge gained on what has happened could even bolster consumer privacy.

“Threat intelligence may be shared if there is a legitimate business purpose for doing so and because such sharing supports the purposes of privacy laws by enhancing the protection of an individual’s personal data,” the group says in a statement.

Lawyers may also baulk at the prospect of losing control of privileged information, were a matter to end up in court. Technical details – whether it was a malware or phishing incursion, and what part of the business was attacked, for instance – might need to be kept in-house.

“Are we getting into fraud, money mule, other types of account data that may be accompanying a cyber attack? You can say: ‘I see this malware installed in a system.’ But if you have information about accounts established to facilitate fraud, that gets to be far more complicated, because you’re talking about client and customer data,” says the cyber risk executive at the US bank.

In the meantime, social ties still rule what information gets out. Robin Hobbs, head of risk management at brokerage BCS Prime Brokerage in London, fully backs the idea of sharing, and, in an old-school approach, runs an informal group for other London brokers’ risk managers, who meet periodically.

“The idea is that if BCS is being hit with a load of clever phishing attacks, the rest of the Street is as well,” he says. “If we can share that information, it helps all of us.”

By working as a phalanx, all banks could become more difficult targets and improve their ability to blunt any attack. But for now, trust remains a work in progress.

“Platforms are good, but they’re not going to get you to the level of trust you need,” says the cyber risk executive at the US bank. “It’s one thing to say: ‘This is a bad IP address.’ But it’s another to say: ‘On this date, I saw this IP scanning my perimeter, looking for ways to execute a denial-of-service attack.’”

Editing by Joan O’Neill and Tom Osborn