Citi, JP Morgan settle Sibor rigging claims; Europe matches US on AML fines. Data by ORX News
The largest publicly reported op risk loss in November was $95 million paid by Societe Generale to the New York State Department of Financial Services for anti-money laundering (AML) and compliance deficiencies.
The regulator first identified failings in SocGen’s compliance and AML programmes in 2009, and ordered the bank to make improvements. The remediation was successful between 2009 and 2013, but from 2014 onwards the bank’s compliance efforts declined “precipitously”, according to the DFS. SocGen received an unacceptable rating for its compliance function in four consecutive examination cycles.
Failures included weak oversight, governance and internal audit, and deficiencies in procedures around suspicious activity reporting, transaction monitoring and customer due diligence. The flaws culminated in November’s consent order with the regulator, as part of which the bank will pay $95 million.
SocGen also suffered a $1.34 billion op risk loss in November in the form of a settlement with US authorities, including the DFS, over sanctions breaches. This penalty is a continuation of previously reported sanctions provisions, so it is classed as a legacy loss and is not included in November’s overall tally.
In the month’s second-largest loss, Citi agreed to pay $38.8 million to the Securities and Exchange Commission to settle allegations that it mishandled transactions involving American depositary receipts. ADRs are securities traded in the US that represent shares in a foreign company. Every issued ADR must be backed by a corresponding number of foreign shares held by a custodian outside the US. In some cases, it is possible to undertake pre-release transactions, in which ADRs are issued before the underlying foreign shares have been delivered.
According to the regulator, Citi provided ADRs for thousands of pre-release transactions when neither brokers nor customers held the corresponding shares to support the new ADRs, in violation of the pre-release agreements. Citi also kept some of its pre-release transactions open for more than five days, even though its policies said they should be delivered promptly.
The third-largest loss came from US mortgage lender Home Loan Center, which was ordered by a US court to pay $28.7 million in damages over the sale of bad mortgage loans to Residential Funding Co between 2002 and 2007. Following Residential Funding Co’s bankruptcy, its successor, ResCap Liquidating Trust, filed a lawsuit in 2013 against Home Loan Center for selling Residential hundreds of loans that fell below the standards agreed between the two companies. Home Loan Center was sold by its parent, LendingTree, in 2012.
Fourth, Aetna Life Insurance Company was ordered to pay $25.6 million after a court found it acted in bad faith by refusing to cover proton beam therapy for a cancer patient in 2014. A jury found that Aetna’s doctors had spent insufficient time reviewing the case before denying the patient’s claims.
Finally, Canada’s TD Bank agreed to pay $18 million to settle a derivatives class action brought by investors on behalf of TD Ameritrade, its subsidiary. The lawsuit alleged that TD Bank disadvantaged TD Ameritrade when the two firms jointly acquired Scottrade Financial in 2017, by reducing the price it paid for Scottrade’s banking division and leaving TD Ameritrade to pay more for the remainder of the business.
JP Morgan and Citi agreed to pay $11 million and $10 million respectively to settle allegations in a US class action that they conspired to manipulate the Singapore interbank offered rate (Sibor), the Singapore dollar equivalent of Libor.
The banks were accused of colluding to submit artificially high or low rates to benefit their traders’ positions. As part of the settlement, JP Morgan and Citi agreed to co-operate with the complainants in their case against the remaining defendants – the 17 other Sibor panel members, including Bank of America, Deutsche Bank and UBS.
The latter half of 2018 has revealed a sequence of money laundering cases affecting banks in Europe. In September 2018, ING Bank agreed to pay a record European fine of €775 million ($880 million) for AML violations, and further details came to light of Danske Bank’s own €200 billion scandal. Most recently, Deutsche Bank was raided over money laundering allegations in relation to the Panama Papers disclosures.
These events appear to herald a shift from previous years, where the US saw the majority of AML losses. European regulators appear to be clamping down on AML in 2018. ORX News data shows that between 2014 and 2017, AML fines in western Europe and the UK totalled almost $214 million, compared to $1.96 billion in the US. However, in the first three quarters of 2018, fines in western Europe and the UK have already reached $918 million – almost matching US fines totalling $1.04 billion for the same period.
Although this is a sizeable increase, 84% of the 2018 total is attributable to the single penalty imposed on ING. Nevertheless, European regulators have steadily increased the number of fines in recent years, from three in 2014 to nine in each of 2017 and 2018. The trend could well continue as banks including Danske, Deutsche and Nordea face investigations in the region.
The lack of a common regulatory framework may explain why European authorities have imposed fewer penalties than those in the US. The European Banking Authority delegates responsibility for AML compliance and enforcement to national regulators, whereas in the US, the Bank Secrecy Act is implemented across all 50 states.
Another factor may be the dispersion of euro cash clearing activity across Europe. Business is split between the UK, France and Germany – meaning that no one regulator has comprehensive oversight of all banks’ activities. In the US, most clearing happens in New York under the purview of the New York State Department of Financial Services, which has consequently levied a third of all US AML fines between 2014 and 2018.
Things look set to change, however. The Danske Bank scandal has triggered calls for an EU-wide AML body to enforce rules and provide resources to countries and regulators. In September, it was reported that the European Central Bank, the European Banking Authority and the European Commission had circulated a confidential AML discussion paper to national governments and the European Parliament, addressing a lack of collaboration by EU countries and their regulators, and inadequate oversight by the EU.
Andrea Enria, EBA chief and soon-to-be head of the ECB’s supervision arm, said in October that recent violations in AML and counter terrorism financing required an EU-level response. He added that the EBA would review supervision in all Union member states with the aim of introducing a consistent approach across Europe. Enria also called for more resources and greater clarity on the EBA’s powers.
The record fine imposed on ING may therefore set a precedent, rather than a high water mark, demonstrating that EU regulators are becoming tougher on AML violations, resulting in larger monetary penalties for firms that flout the rules.
Editing by Alex Krohn
All information included in this report and held in ORX News comes from public sources only. It does not include any information from other services run by ORX and we have not confirmed any of the information shown with any member of ORX.
While ORX endeavours to provide accurate, complete and up-to-date information, ORX makes no representation as to the accuracy, reliability or completeness of this information.
Operational risk-weighted assets across the ‘Big Four’ Australian banks rose A$9.6 billion ($7.1 billion) in the fourth quarter of the year.
Westpac posted the largest increase – at 26.8% – with op RWAs jumping to A$39 billion from A$31 billion in the third quarter. Commonwealth Bank of Australia (CBA) saw its op risk edge up 2.4%, from A$56 billion to A$58 billion.
National Australia Bank’s (NAB) and ANZ Bank’s op RWAs were relatively stable on the quarter, at A$37.5 billion and A$37.6 billion, respectively.
Year on year, op RWAs have swelled A$32 billion (23%) at the Big Four. CBA’s increased the most, by A$24 billion (71%), followed by Westpac’s, which grew A$7.9 billion (25%). ANZ experienced a slight increase of A$313 million (0.8%) and NAB a decrease of A$75 million (0.2%).
RWAs are used to determine the minimum amount of regulatory capital that must be held by banks. This minimum is based on a risk assessment for each type of bank asset. The riskier the asset, the higher the RWA, and the greater the amount of regulatory capital required.
All four Australian banks use the advanced measurement approach (AMA) to calculate op RWAs. This is based on a loss distribution methodology, which models the frequency and severity of past op risk losses to determine how much capital banks should set aside in case those losses recur.
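A loss distribution model of this kind can be sketched in a few lines of Python. The parameters below are invented purely for illustration – real AMA models are calibrated to a bank’s own loss history and are far more elaborate – but the shape is the same: simulate many years of Poisson-frequency, lognormal-severity losses, then read capital off a high quantile of the simulated annual loss distribution.

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's method: sample a Poisson-distributed event count."""
    n, p, threshold = 0, 1.0, math.exp(-lam)
    while True:
        p *= rng.random()
        if p <= threshold:
            return n
        n += 1

def annual_loss(freq_lambda, sev_mu, sev_sigma, rng):
    """One simulated year: a Poisson number of losses, lognormal severities."""
    n = poisson_draw(freq_lambda, rng)
    return sum(rng.lognormvariate(sev_mu, sev_sigma) for _ in range(n))

def op_risk_capital(quantile=0.999, n_years=50_000, freq_lambda=25.0,
                    sev_mu=11.0, sev_sigma=2.0, seed=7):
    """Capital estimate: the chosen quantile of simulated annual losses.
    All default parameters are illustrative, not calibrated."""
    rng = random.Random(seed)
    losses = sorted(annual_loss(freq_lambda, sev_mu, sev_sigma, rng)
                    for _ in range(n_years))
    return losses[min(int(quantile * n_years), n_years - 1)]
```

Under the AMA, banks held capital against roughly the 99.9% quantile of a distribution like this – which is why a handful of extreme historical losses can dominate the capital number, and why the loss must age out of the data window before the capital falls.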
Earlier this year, the Australian Prudential Regulation Authority (Apra) applied an A$1 billion op risk add-on to CBA’s minimum capital requirement after identifying “a number of shortcomings in CBA's governance, culture and accountability frameworks, particularly in dealing with non-financial risks”. The add-on translated into an A$12.5 billion increase in the bank’s op RWAs.
The add-on remains in place, and will only be removed with Apra’s permission. The bank appointed an independent reviewer to report to Apra on its progress in resolving the deficiencies the regulator identified; the reviewer issued its first report in October, with a follow-up expected by the end of the year.
As for Westpac, the bank said the increase in its op RWAs was due to the introduction of a model overlay “to approximate the standardised approach”. This suggests the bank is clearing a path for the eventual switch to the revised Basel standardised approach for op risk, which will replace the AMA in January 2022. Barclays and BNP Paribas made similar moves this year.
New working group will focus on business continuity in the age of cyber threats
The Basel Committee on Banking Supervision has assembled a working group committed to keeping the business of banking humming even in the event of cyber intrusions or just technical snafus.
With typical stealth, the committee set up an operational resilience working group to study “issues related to cyber risk and broader operational resilience”, says the Basel website.
Information has been scant. But last week, at a Basel conference of banking supervisors held in Abu Dhabi, Lyndon Nelson, deputy chief of the Prudential Regulation Authority (PRA) at the Bank of England (BoE), detailed the working group’s brief, whose first task is to “identify the range of existing practice in cyber resilience, and assess gaps and possible policy measures to enhance banks’ broader operational resilience going forward”.
He said it was founded at the beginning of 2018 and aims to provide “a more concrete and specific understanding of the main trends, progress and gaps in the pursuit of cyber resilience in the banking sector”.
A spokesperson for the Basel Committee did not offer further comment on the group’s efforts when contacted.
Its formation, though, may signal a shift in Basel’s thinking on operational risk. Its operational risk working group was disbanded in 2016 after banking regulators opted to abandon the advanced measurement approach – a complex method ultimately seen as too free-wheeling – in favour of the more straightforward standardised approach for measuring operational risk.
Since then, it appears operational risk is shedding its skin of quantifying and capitalising losses to reveal a new layer, ‘operational resilience’ – the ability to rebound from cyber attacks or other disruptions. And the concern with the lurking, present threat of technology gone rogue or haywire is palpable across risk management.
“The development at Basel doesn’t surprise me,” says Jimi Hinchliffe, chief executive of NJ Risk and Regulatory Consulting in London. “Operational resilience has been the main game in town for the PRA for some time, so banks in particular have been focused on this as a key priority. The Financial Conduct Authority has also joined the party more recently.”
The UK authorities also jointly issued a discussion paper in July focusing on improving resilience in the wake of incidents such as Royal Bank of Scotland’s 2012 outage in its Irish operations, which ended up costing it £56 million ($88 million), and this year’s service outage at TSB, which will cost it £20 million. In the paper, the BoE aims to establish minimum service levels to be maintained following such incidents.
Some believe the UK financial authorities may have encouraged Basel to take this direction. In either case, the emergence of the Basel operational resilience working group is viewed by some as recognition that after the shift to the standardised op risk framework, the purpose of the op risk working group had largely vanished.
“Most firms have had operational risk frameworks in place for over a decade now, dating back to Basel II, so the frameworks should be reasonably robust,” says Hinchliffe, who was a regulator at the UK Financial Services Authority and led its Basel II implementation project for wholesale firms from 2006 to 2008. “Given the decision to drop the advanced measurement approach last year, there probably isn’t much need for an op risk working group focused on the framework and minutiae.”
There seems to be an underlying message with the move to the standardised approach and demise of the operational risk working group that operational risk is less important. Is this really the message the Basel Committee wants to send?
Risk executive at a large European bank
Others, however, see the de-emphasis on operational risk modelling as a mistake. Companies have invested heavily in building out their op risk modelling capabilities following the 2008 financial crisis, they note, adding that the new focus undercuts those efforts. The standardised approach – a traditional, backward-looking method – makes no provision for the possibility of catastrophic cyber risk.
“There seems to be an underlying message with the move to the standardised approach and demise of the operational risk working group that operational risk is less important. Is this really the message the Basel Committee wants to send?” says an operational risk executive at a large European bank. “This does little to reassure me that the Basel Committee really understands the importance of operational risk.”
Other op risk executives, meanwhile, have been building resilience around technology in their companies. Cyber attacks are displacing conduct as the main operational threats, they say.
Conduct-driven risks, such as the mortgage-backed securities scandal years ago and the more recent mis-selling practices, are becoming less frequent because of the heavy penalties that follow.
“Ten years ago, if you started such a scam, nothing happened,” says an operational risk executive at a second large European bank. “Now, people are getting sacked and going to jail for such offences. Banks have way more governance in place.”
Industry data backs that up. The average size of operational risk losses plummeted to €206,000 in 2017 from €665,000 in 2012, according to an ORX report based on loss data submitted by its members.
In 2012, the 10 largest op risk loss events accounted for 35% of total losses. By 2017, their share had fallen to 15%. The reduction followed the drop in fines and settlements over crisis-era misdeeds.
“Operational resilience reflects a shift towards regulation which is focused on the impact that a bank has on its customers or the wider market, rather than its own financial stability, ie, capital,” says Luke Carrivick, director of analytics and research at ORX.
The shift was apparent when looking at “impact types” during ORX’s work with banks to develop a new op risk taxonomy, adds Carrivick.
Banks are focusing on reducing more frequent but lower-impact incidents, such as cyber breaches, instead of trying to contain outsized legal settlements and calibrate op risk capital.
“You simply take operational risk capital as a God-given number you can’t influence much,” the second op risk executive says of life under the standardised approach. “It simply comes from a Basel formula, and it’s up to the business owner to get rid of the risks.”
But others warn against complacency on conduct risk. Banks have developed a maze of rules and policies to avert rogue behaviour that may actually make it harder to stop.
“Conduct became the cause célèbre for regulators post-crisis in the UK, and many firms responded by constructing whole new conduct risk edifices, with convoluted new conduct frameworks headed by people with grandiose new titles,” says Hinchliffe. “The result in many cases was duplication, inefficiency and confusion, and the misconduct continued.”
UK banks’ rosy performance in the Bank of England’s stress tests was helped along by lower stressed misconduct charges, which in the 2018 round were nearly half what they were the year prior. Actual misconduct charges reported by six of the seven participating firms at end-2017 were also half their prior-year level and have continued to trend lower in 2018.
The BoE wrote that stressed misconduct costs – legal and regulatory expenses incurred over and above loss provisions – totalled £25 billion ($32 billion) under the 2018 severe stress scenario at the seven participating firms: Barclays, HSBC, Lloyds, Nationwide, RBS, Santander UK and Standard Chartered. This was down from £40 billion the year prior.
Actual misconduct charges, the cash banks put aside to absorb expected legal and regulatory expenses, were £6 billion at end-2017, which reduced the pre-tax profits of the firms by a fifth.
Each of the participants except Standard Chartered discloses misconduct charges in quarterly and annual reports. Together, the six firms set aside £5.3 billion for these charges in 2017, down from £10.4 billion in 2016 and £14.8 billion in 2015. The reduction reflects the banks’ expectations that they will incur lower misconduct costs in future years.
RBS reduced misconduct charges the most of the set, to £1.3 billion for full year 2017 compared with £5.9 billion in 2016. As of September 30 this year, misconduct charges totalled £1.2 billion.
HSBC reported a negative charge of £268 million for 2017, meaning it released cash held in reserve to cover legal and regulatory matters back into net income. Its 2016 charge was £552 million. Up to the third quarter of this year, the charge was £644 million.
Barclays and Santander both curbed misconduct charges only slightly in 2017, to £1.2 billion and £393 million from £1.4 billion and £397 million, respectively. Santander reported lower charges of £62 million for the first nine months of 2018, while Barclays reported higher charges of £2.1 billion.
Lloyds and Nationwide both marginally increased charges in 2017 compared to 2016, to £2.5 billion and £136 million from £2.1 billion and £127 million, respectively. Charges have, however, reduced substantially over the last three quarters, to £550 million at Lloyds and £15 million at Nationwide.
The aggregate Common Equity Tier 1 drawdown reported by the seven stress-tested banks attributable to misconduct charges for the 2018 round was 1%, down from 1.7% in 2017, 1.6% in 2016, and 1.4% in 2015.
Misconduct costs as defined by the BoE are provisions taken against operating income. The BoE uses data supplied by the stress-tested banks directly for its analysis and to calibrate its stress tests. The above data is taken from quarterly and annual reports.
The UK stress tests consist of credit impairment, traded risk and misconduct components. The latter element projects losses due to legal and regulatory failings for each participant bank in excess of end-year provisions over a five-year time horizon.
The BoE pinpointed lower projected misconduct costs as a key reason why the 2018 stress test results turned out so favourably for the seven participants.
Why were the stressed costs lower? One big factor was the settlement of a number of big conduct cases over the past year, including Barclays’ £1.4 billion and RBS’ £3.7 billion settlements with the US Department of Justice over the mis-selling of retail mortgage-backed securities. With these costs now behind the two banks, they were filtered out of the BoE’s forward-looking stress projections.
However, the central bank has previously stated that misconduct costs are tricky to quantify, and that even where they have materialised or look likely to materialise, it's possible the ultimate charges for these will exceed estimates.
Maintaining high misconduct provisions, therefore, may be the most prudent course for banks to take, despite their deleterious effect on earnings. Not only will these help burnish future stress test results, they will also protect banks’ core capital from expensive settlements and charges in future.
Are lower misconduct provisions a cause for celebration, or do they imply that banks' capital is at risk from sudden, expensive legal penalties? Let us know your thoughts by emailing email@example.com or tweeting @LouieWoodall or @RiskQuantum.
Industry moves to revise out-of-date categories that feature risks such as cheque fraud
In 2001, the Basel Committee set its classification scheme for operational risks. Among the threats it listed was cheque-kiting, a form of fraud that siphons money available between the time a cheque is deposited and when it clears.
But even by 2001, cheques had begun their slow move to the sidelines – online payments were only just starting to gather momentum.
Today, cheque-kiting is an anachronism, and a wistful reminder that Basel’s taxonomy needs to be updated – or scrapped.
“People barely use cheques any more, let alone recall what cheque-kiting is,” says an operational risk executive at a global bank. “The bottom line is: this taxonomy isn’t fit for purpose.”
As a result, many banks today run two taxonomies: an internal one tailored to their particulars, and another mapped to the Basel categories – just in case regulators ask.
“We have our own taxonomy, which is more detailed and comprehensive than the Basel taxonomy, which is consistent with other firms on the Street,” says an operational risk executive at a large international bank in New York.
A taxonomy provides a baseline for quantifying operational losses. Being able to categorise a loss as internal fraud, model errors or an IT glitch, for instance, provides clarity on what precisely went wrong, and how to address it.
A general taxonomy also makes it possible to compare lapses across institutions, allowing banks to see how they compare against their peers and what dangers are brewing in the industry as a whole.
Into the breach has stepped the Operational Riskdata eXchange Association, a private consortium of banks and insurers that focuses on operational risk. The association has been pulling together a ‘reference taxonomy’ that expands on Basel with a contemporary suite of risks: cyber, tech, conduct, regulatory and compliance among them.
Organisations are telling us that as their focus becomes more about managing operational risk and less about measuring it, the taxonomies of the future will be geared toward more active risk management
Luke Carrivick, ORX
ORX has been developing its taxonomy over the past year, after surveying its 98 member institutions on what they wanted in a replacement for Basel. A big part of what they want is the ability to see what might be coming, in order to head it off at the pass.
“Organisations are telling us that as their focus becomes more about managing operational risk and less about measuring it, the taxonomies of the future will be geared toward more active risk management,” says Luke Carrivick, head of analytics and research at ORX.
ORX’s project substantially supersedes a taxonomy project several of its member banks – including JP Morgan, HSBC and Barclays – had been working on together, according to a senior op risk manager at one of the banks involved.
In the past, a taxonomy’s main purpose was to model risk for capital planning – to know how much to set aside to cover operational risk. But that’s no longer enough – and with the revised standardised approach for op risk capital being ushered in by Basel III, the role of a bank’s internal taxonomy in dictating how its losses are mapped and aggregated is set to decline in importance.
The overarching theme that emerged from ORX’s consultations with its members was that independent risk teams are working closely with their business divisions, and need a taxonomy that helps them see threats to be ready for them. The principles that ORX assembled were written to be accessible to the broadest number of a bank’s employees, to reflect changes in the risk management field and to be used as a reference by individual companies and the industry as a whole.
“The work done by ORX is aimed at understanding the changes that institutions have made and collating them in a coherent way,” says Guenther Helbok, head of operational and reputational risk at UniCredit Bank Austria and an ORX board member who oversees the taxonomy. “This evolution in the risk taxonomy will encourage a level of industry convergence.”
There are doubters, though. While praising ORX’s professionalism and agreeing it’s well qualified to develop a standard taxonomy, some ask whether a taxonomy developed by ORX – whose members are mostly large banks – makes sense for smaller banks.
“ORX do good work. If the industry is going to develop one taxonomy, I would have thought ORX best placed,” says the global bank executive. But an ORX taxonomy, he added, “may not be appropriate for smaller banks”.
Carrivick of ORX says its growth has lately come from smaller institutions, and regardless he maintains the framework is suitable for both.
“It isn’t intended as a prescriptive taxonomy to take away and use as is, it’s reference, not standards – so appropriateness is about how useful it serves as a benchmark,” he says. He adds it is “just as relevant to a smaller bank as a bigger one. The idea is that banks would pick from reference as appropriate to their business”.
If ORX are coming out with something, great. But I’m asking the question: why do you want to do this? If it’s to promote your external events, that’s fine, but it doesn’t mean every institution has to use your specific categories
Head of op risk at the London office of a large global bank
Another sceptic, the head of op risk at the London office of a large global bank, says if ORX does become the de facto industry standard, that’s fine, so long as companies realise it might have proprietary reasons for doing so.
“If ORX are coming out with something, great. But I’m asking the question: why do you want to do this?” he says. “If it’s to promote your external events, that’s fine, but it doesn’t mean every institution has to use your specific categories.”
If the industry wishes to compare data, it will need to settle on a common taxonomy, and ORX believes it’s well placed to do this. Banks are developing their own taxonomies in addition to Basel, ORX says, but are doing so largely in isolation. While there is a fair degree of commonality across the data, there is wide divergence in the taxonomies themselves.
Banks have not sat idly by waiting for Basel to update its taxonomy. Instead, each has developed its own, using Basel as a starting point – either by stretching its broad categories to cover risks that did not exist in 2001, or by adding new categories and subcategories.
For instance, the head of operational risk at the London office of an Asia-based international bank has developed a taxonomy to classify the risks of technological change and the threat of cyber attack. How? He reviewed the major op risk events at global systemically important banks over the last 20 years and ‘mapped’ them onto one of the Basel categories.
For example, he uses the Basel category ‘Business disruption and system failures’ for IT risks such as the 2013 malfunction of Goldman Sachs’s electronic trading system, which placed 16,000 erroneous options trades on major exchanges.
For another incident, he created a brand-new category – ‘fraudulent exploitation of algorithms’ – to cover the 2014 manipulation of bond prices with electronic trading algorithms by a former Bank of America Merrill Lynch trader in London. He linked this new category to Basel’s ‘External fraud’. He also filed the 2017 data theft at Equifax under external fraud.
The op risk chief likens the creation of a new risk taxonomy to the work biologists do when they categorise a new life form – they start with the categories they already have. In a similar manner, the op risk head reviewed hundreds of tech and cyber risk losses and categorised them, illustrating each with specific examples.
He believes IT and cyber are the themes that will dominate op risk in the future. They are not, however, new risk categories – IT and cyber cut across the existing seven Basel categories.
At the London office of the large global bank, the head of op risk recalls that when he created a risk taxonomy for his previous employer, a large North American institution, he created 15 or 16 separate risk categories. Some of them – internal and external fraud, for instance – were part of the Basel taxonomy, while others weren’t, such as information security and business continuity.
“When I created the taxonomy, I didn’t ignore the Basel categories – I mapped to them,” he says. “Basel has cheque-kiting and other subcategories that aren’t relevant.”
The London subsidiary updates its taxonomy quarterly, creating new categories when necessary. In the past few months, it’s created three, all on cyber and based on the Basel categories: internal fraud, external fraud and business disruption. As the new subtypes are developed, the bank maps them back to the Basel categories.
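At its simplest, such a mapping is a lookup table from a bank’s internal subtypes to the fixed Basel Level 1 event types, plus a roll-up that aggregates losses into the Basel buckets for external comparison. A minimal sketch follows – the internal category names on the left are hypothetical; only the Basel labels are real:

```python
# Hypothetical internal subtypes (left) mapped to Basel Level 1
# event types (right). A real bank's taxonomy would be far larger.
INTERNAL_TO_BASEL = {
    "cyber - insider data theft": "Internal fraud",
    "cyber - external data theft": "External fraud",
    "fraudulent exploitation of algorithms": "External fraud",
    "it outage": "Business disruption and system failures",
    "cloud provider failure": "Business disruption and system failures",
}

def roll_up(losses):
    """Aggregate losses recorded under internal categories into Basel
    buckets, so results stay comparable across firms and reportable
    to regulators."""
    totals = {}
    for category, amount in losses:
        bucket = INTERNAL_TO_BASEL.get(category, "Unmapped")
        totals[bucket] = totals.get(bucket, 0.0) + amount
    return totals
```

Anything recorded under a subtype that has not yet been mapped surfaces as ‘Unmapped’ rather than being silently dropped – exactly the kind of gap a quarterly taxonomy review would be expected to close.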
The Basel working group was disbanded almost as soon as it wrapped up its work on the standardised measurement approach in 2016 – which some took as a signal that operational risk wasn’t high on the regulatory agenda. The Basel Committee declined to comment on whether it planned to update its 17-year-old taxonomy.
“Regulators need to have a common taxonomy. As a regulator, you need to be able to look at data across firms,” says the executive at the global bank. “Do we care that our taxonomy is different than another bank’s? Not in the least. But regulators want to be able to compare like with like.”
But the rationale for a taxonomy goes beyond appeasing regulators: it tracks the relationship between risks and losses. Without clear definitions, an operational risk loss, for instance, could be miscategorised as a credit or market risk loss.
For example, the failure of a system that monitors credit risk exposures could cause a bank to enter into trading positions that increase its credit risk to an unhealthy level. The cause is operational, but the fallout is seen in credit risk.
“There’s always a challenge to assigning a risk type to an incident,” says the operational risk executive at the large international bank’s New York subsidiary. “Having a simple, straightforward risk taxonomy that is up to date and consistent is valuable.”
Such miscategorisations could even be intentional if they’re aimed at reducing operational risk-weighted assets (RWAs), which determine a firm’s op risk capital. Unlike other types of RWAs, op risk RWAs can’t be reduced until the loss has been removed from a bank’s loss history – usually a minimum 10-year wait. Credit or market risk RWAs, in contrast, can be slimmed down fairly quickly and simply.
The difference creates a potential incentive to label op risk losses as something else. Eyebrows have been raised within the op risk community at JP Morgan’s apparent categorisation of the $6.25 billion London Whale loss in 2012 as a market risk loss, despite a regulatory probe blaming the event on weak internal controls and the deliberate manipulation of risk models, among other things.
In 2015, ORX’s news site categorised the total loss as market risk, with a smaller amount – the $1.02 billion in regulatory penalties – ascribed to operational risk. ORX says the news site categorised the losses based on publicly reported data, and that it does not know how JP Morgan characterised the losses internally. However, two users of the firm’s private member database – to which members submit actual loss data anonymously – say it contains no loss record that would match the London Whale losses had they been categorised as op risk losses. JP Morgan declined to comment.
ORX says its news site’s categorisation was reviewed by its Definitions Working Group, which concluded that the London Whale trading loss should be considered a market risk loss and the regulatory fines an operational risk loss. The working group’s view reflected a consensus of a wide range of companies belonging to the group, the firm says.
Start-up advisory firm Quant Foundry has built a taxonomy that uses machine learning to map a bank’s businesses, processes and controls. By populating the model with actual loss data, it’s possible to pinpoint the cause of loss events with great precision, the company says.
“Banks collect data, but they’re not using that data to manage risks,” says Chris Cormack, a founding partner of Quant Foundry in London. “Our approach is to build a data model that allows banks to understand how risks may have arisen.”
One of the arguments for updating the Basel taxonomy is that it’s not written in language that bank executives would use to describe their risks.
“We all need to be speaking the same language,” says Matthew Moore, vice president of operational risk management at Deutsche Bank in New York. “When we are using different data to say the same thing, it takes away from the time we could be spending on managing risk.”
Operational risk executives say the ORX taxonomy offers a way for banks to speak the same language, both inside their institutions and out. And with a common reference point, whether set by Basel, ORX or some other industry consortium, any incident could be reported as an example of a specific risk type, which could then be aggregated to form an industry statistic.
“It’s important that when we have certain types of incidents that we know that we’re talking about the same thing,” says Moore.
Still, the benefits of in-house taxonomies, mapped to ORX or Basel, are clear: firms can make their taxonomies as detailed as they want, while the mapping ensures a common set of risk types and losses that can highlight operational risk across the industry.
In developing taxonomies, banks have to strike a balance between being accessible and being thorough. If the taxonomy takes too simple a view, it might not cover a wide variety of losses. If on the other hand it’s too detailed, it might be incomprehensible to all except op risk experts.
“The most important thing is to identify new risks, add them to the taxonomy and avoid the urge to overcomplicate things,” says the executive at the large international bank’s New York subsidiary.
Additional reporting by Tom Osborn
UK cyber incidents increased 18% over past 12 months, despite apparent confidence in systems
The UK’s Financial Conduct Authority has recorded a 138% increase in technology outages at financial firms and an 18% rise in cyber incidents, according to one of the regulator’s top officials.
“On the basis of the data that the FCA is currently collecting, we see no immediate end in sight to the escalation in tech and cyber incidents that are affecting UK financial services,” said Megan Butler, executive director of supervision at the FCA, in a speech yesterday. Butler, who was discussing the regulator’s latest survey of cyber incidents, said the FCA was “extremely concerned” that the number of incidents reported to it had increased.
But there were several caveats surrounding the new data. The rise in the number of incidents does not necessarily suggest a “surge in cyber attacks”, she said.
Instead, firms are now reporting incidents “more robustly”, choosing to report events that in the past they might have been inclined to sweep under the carpet. However, there are still signs of under-reporting, the FCA says.
In 29% of the 186 reported incidents, firms did not inform the regulator of the specific root cause of the incident, and the regulator remains in discussion with them to obtain further information.
While the FCA is concerned about the latest data, its core worry centres on how firms are preparing and reacting to cyber events. The “true test of the resilience of UK finance is not the absence of incidents”, Butler said. “It’s how well incidents are managed.”
According to the FCA’s latest data, 20% of the reported incidents were a direct result of weaknesses in change management. This was the most frequent cause of outages over the past 12 months.
“We are worried that a lot of firms seem overly confident about their ability to manage flagship IT change programmes and keep their systems up to date,” Butler said.
In the FCA survey, both large and small businesses listed updating and managing IT change programmes as strengths, but the regulator believes firms could be “ignoring dangerous information”, suggesting a mismatch between reality and corporate expectations.
“Leaders don’t appreciate the level of risk, or else they overestimate their abilities,” Butler said. “And this overconfidence bias does seem to be particularly characteristic in financial services.”
The FCA wants senior management to understand their processes, so there is sufficient in-house capacity when things go wrong, and staff to feel able to question the stability of their systems. Instead, financial services IT is largely dominated by outsourcing.
“Chief information officers command armies of semi-permanent contractors or unregulated third parties,” Butler said. This limits in-house capabilities and demands close oversight of external parties.
And yet, according to the FCA’s survey, only 66% of large firms and 59% of smaller ones say they understand the response and recovery plans of their third parties.
Cyber has grown in prominence for financial institutions in recent years, with most now listing it as their top concern for the years ahead. As a result, there has been an industry push to use new technology to reinforce defences and systems against attack.
“The most mature sectors (in terms of the cyber capabilities of large firms) are non-bank payments, retail banking and wholesale banking. In that order,” said Butler, citing the survey results.
The least mature are wholesale markets, retail investments and retail lending. Among smaller firms, general insurance and protection are the most mature, and retail investments the least. In future, the FCA believes, firms should alter their focus from technological investment to human investment.
“Computers are perfectly neutral regarding their output. It is your people who decide whether to use them for a specific reason and what the purpose of that is,” Butler said. “If we fail to educate and support people, and an employee then triggers an impact, is that their issue? Or did their employer fail to provide them with the support they needed to perform their role?”
According to the survey, 90% of firms told the FCA they operate a cyber awareness programme. But, at the same time, many businesses struggled to identify and manage “high-risk staff” – those who handle critical and sensitive data.
This article originally appeared on Risk.net’s sister website, CentralBanking.com.
Financial firms are increasingly adopting the three lines of defence framework to manage risk. But how has the model evolved to date and what does the future look like for this key risk management tool?
Establishing and maintaining clear roles and responsibilities is one of the biggest challenges organisations face when developing a three lines of defence (3LoD) framework for risk management – a vital part of creating a robust foundation that can evolve and adapt to change. Awareness, education and understanding are crucial throughout all three lines, but particularly the first.
“The most common execution risk in implementing line-of-defence frameworks is a lack of clarity on roles and responsibilities,” said David Canter‑McMillan, vice-president, function head of operational risk at the Federal Reserve Bank of New York, during a recent IBM-sponsored Risk.net webinar.
A first-line control unit can help in this respect by ensuring expectations are understood and met – answering why the second line needs certain information, for example. “They must be the catalyst to help the [second line] get that [information] – that is a key role of the first-line control unit,” said Kevin Krueger, vice-president, markets group at the Federal Reserve Bank of New York, who also took part in the webinar.
At a number of organisations using the 3LoD model, subgroups have developed within the first line in response to communication challenges. Sometimes referred to as the ‘1b line’, they typically consist of those that do not necessarily own or directly control risk, but are part of a functional team for which risk management is a primary responsibility. By contrast, 1a refers to the control owner – the supervisory or management layer responsible for delivering the steps that control the risks in question. Such subgroups are not always useful but, depending on the organisation, there are ways to turn this development into a positive. Either way, it is something organisations should monitor closely.
Christophe Delaure, senior product manager at IBM, asserted during the webinar that, while a 1b or 1.5 line could mitigate communication issues around understanding the roles and responsibilities of each of the lines, it could also indicate a problem or even a lack of confidence within the first line. “It potentially shows a deficiency or a complexity level that’s too high for the first line,” he said.
“There is always a risk that a subgroup will lead to a drain of accountability from the first line,” added another panellist, Anna Hardwick, chief control officer, global operations at HSBC. “But, as long as you are aware of this trap, you position yourself to avoid it and you are clear about accountabilities, the two can actually work in harmony.” In fact, she believes the 1b line can be a function of structural concentration and expertise that ensures key checks and balances are completed to hold the first line to account. “If the balance between the two is right, this can work very well,” she said.
For organisations that get the balance right, further development of the framework will not stop there as they encounter more change – particularly from disruptive technologies such as artificial intelligence. Greater use of automation in general within the governance, risk and compliance function could simplify processes throughout all three lines, but organisations must be ready for such changes by ensuring their frameworks are robust and flexible enough to evolve with such developments.
According to IBM’s Delaure, automation is likely to affect organisations in two important ways when it comes to the 3LoD model of risk management. In addition to replacing the manual work of IT risk and compliance, and establishing a set of controls for implementation across the organisation, automation can also provide “expertise at everyone’s fingertips”.
For example, cognitive or natural language processing technology could be used to communicate the knowledge gathered by the second line to the first line. “Mostly today, we see a very manual process, with the second line training the first,” he said. “But this [technology] can instead be embedded across the organisation within the systems and processes, as long as the user interface is simple.”
What could this mean for the organisation and for the future decision-making processes of its leaders? Within the 3LoD structure, the nature of the control environment will change because of automation, which will affect the roles and responsibilities of the control owners in the first line in particular.
“[The first-line control owners] will move from an environment of heavy functional knowledge and human experience that is relied upon to understand how a control works … to a set of very complicated processes and an ‘under-the-hood’ system project,” said HSBC’s Hardwick. As such, the first-line control designers and operators will, to some extent, become technologists and data scientists. “Everyone has to be ready for that because the ability to know your processes are doing the right thing [will] become a lot more complicated in a different way,” she added. “Organisations must employ people and organise and adapt their framework to respond to that change.”
By creating clear avenues of communication and widespread understanding of the purpose and requirements of the 3LoD, particularly within the first line, organisations can adopt new developments confidently. This will not only empower the first line, but also provide a robust yet flexible framework for a sound risk management strategy and a solid foundation to face future change.
The IBM-sponsored Risk.net webinar How to upgrade your first line of defence is available on demand.