Life’s a breach: banks settle uncomfortably into GDPR

By Steve Marlin | Features | 23 April 2019

A year into exacting data privacy regulation, ramifications are becoming more tangible

The European Union’s General Data Protection Regulation, possibly the most draconian privacy regime currently in existence, has companies that do any business in Europe flailing to comply. Its scorching penalty for serious violations – among them failing to report data breaches within 72 hours – can reach 4% of global annual revenue.

But it’s not just private entities that are straightening up since the regulation went into effect last year – nor is it just data breaches that need to be addressed.

Public authorities are peppered daily by a hail of cyber attack attempts, many emanating from Russia and China. But the number of possible breaches is actually small, and most are either contained or can be isolated quickly.

Government entities have another problem: queries from the public. Under GDPR, anyone can ask to see their personal information. If it no longer serves any purpose to the entity holding it and if the person so desires, they can ask that it be purged under the so-called right to be forgotten. If they are not satisfied with an institution’s response, they can file a complaint with the Information Commissioner’s Office (ICO), which has aroused a good deal of curiosity and some confusion.

“The nature of the complaints is usually misinformed members of the public thinking that we hold this massive Big Brother data, when the data we actually hold is quite rudimentary,” says an official familiar with the matter, who requested anonymity.

For economic reports, for example, a central bank might hold personal data such as the amount of a mortgage and the holders of the mortgage, along with demographic details such as the fact that the mortgage is for a two-bedroom flat held by a couple in their mid-30s, he adds: “Some members of the public think that we’re using this data to spy on them, and complain to the ICO.”

It is not clear what penalties central banks might face for mishandling their duty to inform citizens. But the long-tailed implications of GDPR are just coming into sharper focus as the regulation sinks in.

It was widely understood that banks would have to lay out large amounts of cash to ensure they stayed within its wide-ranging boundaries. “We have spent more than $2 million to implement GDPR internally, and you will find similar figures at other banks,” says an adviser at a large international bank.

The many trapdoors of GDPR 

Financial institutions are doubling down on information security, compliance and legal protections to avoid what happened to Google, which in January got hit with a €50 million ($56.3 million) fine by the French data protection authority. The reason? Its data consent policies were neither transparent nor accessible, the authorities said, and therefore fell foul of GDPR.

But banks, because of their global footprint and their massive stores of data on both people and corporations, are exquisitely sensitive to GDPR.

In the event of a breach, the bank must inform both the customer and the data protection authority when, where and how the breach occurred. This means very quickly determining the location of the data, its purpose (whether it’s being used with customer consent or for regulatory purposes) and whether it’s being controlled by the bank directly or by a vendor, which could be in another country.

American companies had a deer-in-the-headlights reaction. Companies don’t know how to look for things in big data

Ariel Silverstone, Data Protectors

One large European bank has a dozen projects under way to assess what it needs in order to comply with GDPR and make the necessary adjustments, says its data protection officer. Most other banks are in the same boat.

The adviser at the international bank says most of the $2 million investment in GDPR has gone towards systems for managing the bank’s data, which is stored around the world at multiple branches, subsidiaries and vendors. The bank therefore decided to build its own software to track the flow of data.

“We had to build up a data landscape from scratch,” he says. “In the past, it was cheaper to simply amass data, rather than have retention policies. Now, we are forced to delete data, and we have to provide evidence not only to the customer, but to the authority.”

Privacy officers need to exercise judgement on what constitutes a breach: not all need to be reported. For instance, if a laptop containing customer data is left on a train, but the data is encrypted, this would be deemed a lower risk than if the data were unencrypted. In contrast, if a set of documents on its way to being shredded were stolen from a van, that would be reported, and considered more serious. 

“If the impact is significantly high, then it needs to be reported,” says the data protection officer.

The supply chain represents a million tributaries. While GDPR has ‘model clauses’ that specify ‘major suppliers’, there are many other suppliers that could be problematic.

Permission to use data is another slippery area. If the purpose of collecting the data has changed, the original consent may no longer be valid. If so, a company needs to demonstrate that it still has permission to use the data.

The potential for class action lawsuits also hangs in the air. GDPR allows consumers to seek redress for, among other things, ‘mental anguish’ caused by a data breach.

“Under GDPR, you can elect frustration reasons,” says the adviser at the international bank. “If you feel mental pain, you are allowed to seek damages.”

Compared with other industries, banking has a head start: relatively strong information security and data protection policies were already in place. Even so, the sector is spending heavily. In 2018, financial firms spent an average of $1 million on GDPR compliance, while non-financial companies spent only $250,000, according to advisory firm Gartner.

Under the 1995 data protection directive that GDPR replaced, each EU country enacted its own laws, resulting in a chequerboard of requirements across jurisdictions. Which breaches were deemed reportable varied widely. Under the new regulation, the threshold for mandatory reporting has been lowered.

“Banks are now obliged to notify the authorities wherever there is significant risk,” says Michael Kaiser, a spokesman for Germany’s Hesse Data Protection Commissioner, which includes the banking capital of Frankfurt. “Under the former German data protection law, the obligation existed only where a very high risk was assumed.”

One regulation, a wide range of breaches

The European Data Protection Board reported a total of 206,326 cases across the EU since GDPR’s launch up to the end of January. Of the total, 64,684 were data breaches and 94,622 were complaints. Data authorities imposed €56 million in fines, presumably inflated by the €50 million Google fine.    

The numbers varied by country. The UK’s ICO received more than 8,000 breach notifications in just the first six months of GDPR. France saw 1,170 in 2018, and Germany has had around 1,200 since GDPR went live, while Austria received 551 last year.

The Netherlands was an outlier, with a lofty 20,881 data breach notifications in 2018, more than double the year before. The financial sector accounted for 26% of the cases.

Why so many breaches in that particular country? The Netherlands has had mandatory breach reporting in place since 2016, and its companies may therefore be spring-loaded for the tougher GDPR, said one lawyer who specialises in cybersecurity.

“Dutch companies may have had a head start, better awareness of the legal requirements, existing processes and procedures for breach identification and breach reporting,” said Françoise Gilbert, co-chair of the data privacy and cyber security practice at law firm Greenberg Traurig in San Francisco.

In general, the number of reported breaches could also reflect companies’ grasp of requirements, how the local authority conveys the requirements or “how vehemently it has prosecuted” for violations. She notes also that a lot of breaches are reported unnecessarily, “out of fear or ignorance, even though an incident might not meet the threshold, because companies are afraid of being prosecuted”.

In the UK, Elizabeth Denham, the information commissioner, said pointedly in a speech in December 2018 that the purpose of GDPR was not just to uncover breaches, but to get companies to be responsible for what they did with data: “If, within the 72-hour time limit, a UK organisation has no clue as to the who, the what, the how of a breach, then it is clear that they do not have the required accountability in place – which is a requirement of the law.”

For €600,000, there was never a business case to fine-tune your data-deletion policies. Now, with the fines, the business case has exploded

Punit Bhatia, GDPR expert

As banks are sorting out GDPR, some face other regulations that bear on data privacy. One large international bank in Malta has established compliance teams not only for GDPR, but also for the anti-money laundering (AML) and payment services directives.

“There is tension between the obligations under AML and GDPR,” said David Cauchi, head of compliance at Malta’s Office of the Information and Data Protection Commissioner. “The timing of GDPR was not ideal because they are in the midst of other compliance challenges.” 

In November 2018, the European Commission ordered Malta to redouble its efforts against money laundering, after the European Central Bank shut down a Maltese bank on allegations of fraud and money laundering.

Last year, data breaches from cyber attacks cost financial firms $935 million worldwide, ORX News data shows.

But even so, the number of breaches caused by cyber attacks is relatively small, despite popular perception. Although hackers, rogue states and organised crime try to climb banks’ cyber walls daily, most of them are thwarted. Kaiser, the spokesman for the Hesse Data Protection Commissioner, has seen only three cyber attacks on the banking industry, none of them successful.

But whether or not attacks hit the mark, banks took fright at France’s penalty on Google, the first large enforcement of GDPR and for a matter unrelated to breaches.  

Compared with the previous directive, fines under GDPR can be so large that even mundane incidents need to be investigated.

In Belgium, for instance, the maximum fine under GDPR’s predecessor directive was €600,000.

“For €600,000, there was never a business case to fine-tune your data-deletion policies,” says Punit Bhatia, a GDPR expert and author based in Brussels. “Now, with the fines, the business case has exploded. They will gladly spend €10 million to avoid a €100 million fine.”

Besides fines, breach notifications are likely to become public.  

“When a breach is disclosed to affected individuals (eg, to customers of a business), that could become ‘public’ because at least one customer is likely to comment about the incident on publicly accessible social media,” said Gilbert of Greenberg Traurig.

Damage to the brand could end up hurting more than a large fine. The public or regulators may also question whether the company provided reasonable security for customers’ data – that is, a level of security commensurate with the sensitivity of the data.

“The concern about shame and reputation may create a significant incentive for increasing their security budget,” says Gilbert of companies holding data.

California cousin

California, tech matrix of the US and birthplace of unicorns, has often been ahead of the rest of the country on social issues. Data privacy will soon be one of them.

The state last year passed a law with some similarities to GDPR, though a milder one. The California Consumer Privacy Act will enable people to find out what personal data a company has held on them over the previous 12 months, and will allow them to sue a company whose lack of safety measures results in a breach. The law’s maximum penalty is capped at $7,500 for each intentional violation.

The law’s protections will apply only to California residents and govern only companies conducting business in the state. But, given the size of California’s economy and the centrality of the tech sector, the law – as with GDPR – could well have global impact. It is due to come into effect on January 1, 2020.

Proposals are already afoot to make filing suit easier and to expand the data covered to include passport and biometric data. The California law already allows for class action lawsuits, as does GDPR.

“GDPR opens the door for class actions. There is a similar framework for class actions in the US,” the adviser at the international bank tells Risk.net. “I am very concerned about this issue.”

The adjustment to GDPR might be tougher for US companies. European companies have had ample time to get used to the idea of it. The previous directive had been on the books since 1995, and GDPR wended its way through negotiations for five years.

For US companies, it’s a different story. It is probably no coincidence that GDPR’s biggest fine so far hit a Silicon Valley titan. The cultural differences on opposite sides of the Atlantic are stark. In the US, private property holds a strong grip on the national ethos, and big data is a mantra on the west coast. In Europe, concern for the privacy of its citizens and general group welfare can more often best corporate interests.

“In America, companies believe they own the data. In Europe, data ownership is a basic human right,” says Ariel Silverstone, external data protection officer at Data Protectors in Huntington Beach, California.

Now, US tech firms have had to come to terms with combing through their hoards of data to comply with GDPR.

“American companies had a deer-in-the-headlights reaction. Companies don’t know how to look for things in big data,” says Silverstone.

Stoplight system could spot imminent meltdowns – study

By Alexander Campbell | News | 22 April 2019

Proposed traffic light system sifts through transaction data for signs of trouble

A traffic light system of risk indicators for financial market infrastructure could use transaction data to detect threats to financial stability in almost real time, according to research due to be published later this year.

Ron Berndsen, chairman of LCH’s risk committee, and Ronald Heijmans, a senior data scientist and policy adviser at De Nederlandsche Bank, argue in their upcoming paper that monthly or quarterly monitoring of market infrastructures such as payment and settlement systems involves too much delay.

Instead, alerts for supervisors and operators should follow a traffic light system – red, amber or green – that would light up in near real time as risk thresholds are breached.

Using transaction data from the European Union’s Target2 real-time gross settlement system from 2008 to 2017, Berndsen and Heijmans constructed indicators for operational risk, concentration risk and liquidity dependence – three risks highlighted by the Bank for International Settlements’ Principles for Financial Market Infrastructures.

Using daily transaction data would allow the indicators to be calculated the very next day – “which is close to real time compared to banking supervision, which does monthly monitoring based on numbers that are several months to quarters old”, Heijmans says. The indicators could even be calculated the same day if intraday data was made available, he adds.

For operational risk, they concentrated on the distribution of transactions over the working day – if too many are delayed until later in the day, the system might not be able to finish processing them all before the close. Target2 guarantees that a certain maximum number of transactions can be completed each day. Berndsen and Heijmans suggest that if the number of transactions exceeded that level once or more in a month, this should flash a red light for the month; if volumes came within one standard deviation of the maximum, that would represent an amber light for the month.

They also suggest tracking the distribution of payments by value over the day. The UK payment system Chaps aims to transfer 50% of total value for the day by noon and 75% before 2.30pm; breaking the first target should flash amber; breaking the second would be red, they say.
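
By way of illustration, the volume and timing checks might look like the following Python sketch. The guaranteed maximum, the sample figures and the function names are invented; the paper’s actual calibration may differ.

```python
import numpy as np

def volume_light(daily_volumes, guaranteed_max):
    """Red if any day this month exceeded the guaranteed maximum;
    amber if volumes came within one standard deviation of it."""
    vols = np.asarray(daily_volumes, dtype=float)
    if (vols > guaranteed_max).any():
        return "red"
    if (vols > guaranteed_max - vols.std()).any():
        return "amber"
    return "green"

def timing_light(share_by_noon, share_by_1430):
    """Chaps-style targets: 50% of value by noon, 75% by 2.30pm.
    Missing the first target is amber; missing the second is red."""
    if share_by_1430 < 0.75:
        return "red"
    if share_by_noon < 0.50:
        return "amber"
    return "green"

print(volume_light([410_000, 395_000, 502_000], guaranteed_max=500_000))  # red
print(timing_light(share_by_noon=0.46, share_by_1430=0.81))               # amber
```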

Concentration risk refers to a few participants having an abnormally high market share, which could jeopardise the payment system as a whole if they defaulted. Berndsen and Heijmans suggest using the Herfindahl-Hirschman index of concentration – the sum of the squares of each participant’s market share, normalised for the number of participants. Mapping this for outgoing transaction turnover, for centrality and for connectivity produces three possible concentration risk indicators; they suggest using the 99th percentile as the threshold for a red warning and the 95th for an amber warning.
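
A sketch of the normalised index and the percentile mapping follows, assuming – the paper’s precise choice is not spelled out here – that the thresholds are taken from the indicator’s own history. The same red/amber mapping can serve the liquidity-dependence measures described next.

```python
import numpy as np

def normalised_hhi(flows):
    """Sum of squared market shares, rescaled for the number of
    participants: 0 means evenly spread, 1 means fully concentrated."""
    shares = np.asarray(flows, dtype=float)
    shares = shares / shares.sum()
    n = len(shares)
    hhi = (shares ** 2).sum()
    return (hhi - 1 / n) / (1 - 1 / n) if n > 1 else 1.0

def percentile_light(today, history):
    """Red above the 99th percentile of history, amber above the 95th."""
    if today >= np.percentile(history, 99):
        return "red"
    if today >= np.percentile(history, 95):
        return "amber"
    return "green"

# Invented example: one participant dominating outgoing turnover
history = [normalised_hhi(np.random.default_rng(day).lognormal(size=50))
           for day in range(250)]
today = normalised_hhi([40, 5, 5, 3, 2] + [1] * 45)
print(percentile_light(today, history))
```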

And liquidity risk could be measured by comparing turnover with that of other payment systems – a higher share of turnover going through the other systems represents increased dependence on them, as does a drop in the amount of settlement using central bank money (and thus in total turnover). Tracking the number of 99th and 95th percentile exceptions a month for the total and relative values, and for a moving average, can produce red and amber alerts for liquidity risk as well, they suggest.

Trade-off

Heijmans cautions that the research is not ready to use just yet. “We tried to connect to existing guidelines instead of just making up our own. This makes it more concrete, as it is used in practice, and not so hypothetical,” he says. “With respect to the threshold, it really depends on when and how often you want to be informed. It is a typical trade-off between Type 1 and Type 2 errors – false positives and false negatives. Depending on your role – overseer, operator – you may choose different thresholds.”

Regulators and operators already use risk indicators based on transaction data, he says, but a traffic light system is more helpful because it can point to potentially risky changes in the system instead of being triggered only once a stress event is already in progress.

“End users – overseers, operators, management – need to have a good intuition on indicators. If you make it difficult for that type of audience to understand your idea quickly, it will often stay unused. That is why we chose the green, amber, red approach, which is an often used method by board members.”

The same approach could also be used to cover other risks, such as credit risk or collateral risk, he adds.

Modelling cyber losses could get easier – study

By Alexander Campbell | News | 19 April 2019

Cyber losses behaved much like non-cyber losses when grouped by severity, so perhaps less data is needed

Modelling cyber losses might be easier than some practitioners say, an upcoming research paper suggests.

As losses mount from cyber risk, worries about the industry’s ability to model the risk properly have grown apace. Insurers warn of a lack of reliable loss data and even of standard definitions of different types of cyber threats – the latter problem sparking a new project from the US’s Federal Reserve Bank of Richmond.

But after delving into several thousand cyber and non-cyber loss events, a team of researchers at UK-based insurer Aon has concluded cyber risk is less novel, in risk modelling terms, than it might appear. Ruben Cohen, who works in risk management at Aon, had previously used dimensional analysis techniques to show that different categories of operational risk were more similar in distribution than they appeared, and therefore could be lumped together for risk modelling purposes.

The same appears to be true of cyber risk, according to the latest research by Cohen and his colleagues at Aon. The raw datasets – 5,060 publicly reported events, of which 350 were cyber losses, and 1,103 Aon claims, of which 26 were cyber losses – showed very different distributions of frequency and severity.

The researchers therefore transformed the datasets into series of dimensionless numbers, which could be compared. The loss datasets were categorised into bands by severity, and one transformed variable was the midpoint of each band, divided by the overall average severity within the sample. The other variable consisted of the number of losses in each band divided by the midpoint of the band, multiplied by a factor comprising the overall average divided by the overall sample size.
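
In symbols: for a severity band with midpoint m and count n, drawn from a sample of N losses with mean severity μ, the plotted pair is x = m/μ and y = (n/m)·(μ/N). The sketch below applies that transformation to simulated losses – the Aon datasets themselves are not public, so the figures are stand-ins.

```python
import numpy as np

def dimensionless(losses, band_edges):
    """Return (x, y): x = band midpoint / mean severity,
    y = (band count / band midpoint) * (mean severity / sample size)."""
    losses = np.asarray(losses, dtype=float)
    mu, n_total = losses.mean(), len(losses)
    counts, edges = np.histogram(losses, bins=band_edges)
    mids = (edges[:-1] + edges[1:]) / 2
    keep = counts > 0
    return mids[keep] / mu, (counts[keep] / mids[keep]) * (mu / n_total)

rng = np.random.default_rng(7)
cyber = rng.lognormal(mean=12.0, sigma=2.0, size=350)      # stand-in losses
non_cyber = rng.lognormal(mean=11.0, sigma=2.2, size=4710)
edges = np.logspace(3, 9, 13)                              # severity bands
for label, data in [("cyber", cyber), ("non-cyber", non_cyber)]:
    x, y = dimensionless(data, edges)
    print(label, len(x), "occupied bands; first pair:",
          round(float(x[0]), 3), round(float(y[0]), 6))
```

Plotted on log-log axes, the transformed cyber and non-cyber series can be compared directly, which is the convergence the authors describe.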

Once the transformed data was plotted, it showed cyber losses behaved similarly to non-cyber losses.

“The convergence observed for all the transformed data is quite remarkable,” say the authors. “The profile of cyber risk might be fundamentally closer than expected to non-cyber risk … perhaps proving once again that operational risk can be characterised by a unique and universal tail behaviour.”

The paper, to be published in September in the Journal of Operational Risk, is the work of Cohen and three other executives at Aon: Jonathan Humphries, an executive director; Sabrina Veau, a director at Aon Benfield; and Roger Francis, a vice-president at Aon’s risk management subsidiary Stroz Friedberg.

The authors also argue there is no need to abandon established Basel taxonomies for non-cyber op risk in order to accommodate cyber. Three of Basel’s current seven categories – ‘data/execution’, ‘fraud’ and ‘systems’ – could form the basis of a common framework for loss-data collection, sharing and analysis. 

The researchers also looked into whether the nature of the cyber threat had changed in recent years. They carried out the same comparison of cyber to non-cyber losses from 2007 to 2014 with more recent losses from 2015 to 2017 – and found no obvious signs of divergence.

The datasets are probably too small to prove reliably that the cyber threat is unchanged since 2007, they admit. But Cohen and his colleagues are nonetheless optimistic about the implications of the study.

They write: “With cyber and non-cyber losses being broadly alike … it can be claimed that exposures could be quantified using only a small set of data. For instance, one should, in theory, be able to derive estimates of risk measures from a small (random) subset of a data sample, rather than having to use an entire dataset.

“This could, therefore, help get around the issue of data insufficiency.”

Cohen told Risk.net on April 17 that the next step should be to repeat the analysis for other sources of loss data: “It would be interesting to apply this method to data obtained from other well-established sources, such as ORX, etc, and see what comes out.”

Smart weaponry aids bank fight against money laundering

By Steve Marlin | Features | 10 April 2019

Advanced algos and machine learning gain credence as regulators encourage innovation

In the struggle against money laundering, banks are on the defensive. Tough laws have forced firms to develop elaborate procedures to detect prohibited transactions. But when these systems fail – as they often do – lenders leave themselves open to crippling financial penalties. US banks faced fines for anti-money laundering breaches of $1.37 billion in 2018, with those in Europe not far behind at $979 million, anonymous industry loss data from ORX shows.

As banks look to develop more advanced tools to root out criminality, their efforts have been undermined by suspicion among regulators over machine learning, and specifically the difficulties in explaining and justifying its use.

In December, though, the US Federal Reserve gave a clear signal that financial firms would no longer be penalised for trying to innovate to tackle money laundering. With fear of regulatory censure partly removed, banks are renewing their efforts to develop machine learning programmes to fight financial crime.

“We are looking at advanced technologies which are machine learning-based to help us detect new behaviour patterns,” says Jayati Chaudhury, global investment banking lead for anti-money laundering (AML) transaction monitoring at Barclays. “Machine learning has the potential to reduce false positives and create more efficiency so resources can focus on what is truly suspicious based on the business and risk appetite of the firm.”

The use of machine learning for AML is not new; financial firms have been developing self-learning algorithms for years. But the tacit acknowledgement by regulators that existing rules-based systems are, on their own, not up to the task of stamping out money laundering has lent extra impetus to the push to explore new methods.

Two of the most common machine learning techniques used in AML models are decision trees and neural networks. The approaches sit at opposite ends of the artificial intelligence spectrum in terms of complexity – but each has advantages for particular aspects of the fight against suspicious transactions.

Decision trees model the factors that contribute to a particular event or outcome. They provide a visual representation of splits, or branches, within a dataset. They can be used to show, for instance, how geography, movement of funds and different types of payment interact to determine the likelihood that a transaction is suspicious.

“Decision trees typically use many more features and provide more granular explanation of client activity compared to rules-based systems,” says David Stewart, director of financial services, fraud and security intelligence at vendor SAS.

More complex random forest models, which aggregate many decision trees, allow users to build more predictive machine learning algorithms. Random forests iron out the biases that may exist in any single decision tree, and are thought to provide more accurate plots of cause and effect.
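
The contrast is easy to see in a few lines of scikit-learn. The features, the toy “ground truth” and the thresholds below are invented for illustration – this is a sketch of the technique, not any bank’s production model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 5, n),        # geography risk band
    rng.exponential(10_000, n),   # amount of funds moved
    rng.integers(0, 3, n),        # payment type
])
# Toy ground truth: large sums through high-risk geographies are suspicious
y = ((X[:, 0] >= 3) & (X[:, 1] > 25_000)).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["geo_risk", "amount", "pay_type"]))

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(forest.predict_proba([[4, 60_000, 1]])[0, 1])  # P(suspicious)
```

The printed tree is the “visual representation of splits” described above; the forest trades some of that legibility for predictive accuracy.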

Neural networks, on the other hand, attempt to find non-linear correlations in large datasets by using methods that mimic the way the human brain operates, calculating millions of possible combinations before arriving at a result. The idea is to use advanced analytics to find patterns that classical AML systems would miss: whereas older rules-based systems classify transactions according to pre-set criteria such as age, occupation and income, which are determined by analysing existing data, neural networks use advanced statistical techniques to detect anomalies in behaviour.

For example, a customer could have accounts in a bank’s corporate, correspondent banking and institutional brokerage businesses. Analysing the transaction activity among these different relationships may highlight suspicious activity that had not surfaced previously.
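
One generic way to express that idea in code is an autoencoder-style network: train it to reproduce routine activity, then score transactions by how badly it reconstructs them. The data and architecture here are simulated assumptions – a sketch of the technique, not a description of any vendor’s or bank’s system.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
normal = rng.normal(size=(5000, 6))      # features of routine activity
scaler = StandardScaler().fit(normal)
Z = scaler.transform(normal)

# An MLP trained to map inputs back to themselves acts as a crude autoencoder:
# the 3-unit bottleneck forces it to learn the shape of normal behaviour
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=500,
                  random_state=0).fit(Z, Z)

def anomaly_score(x):
    z = scaler.transform(np.atleast_2d(x))
    return float(np.mean((ae.predict(z) - z) ** 2))  # reconstruction error

print(anomaly_score(normal[0]))        # low: routine transaction
print(anomaly_score(np.full(6, 8.0)))  # high: far outside learned behaviour
```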

Accentuate the positives

Not only can machine learning help improve detection rates, banks hope, but the technology also promises to slash the incidence of false positives, where transactions are flagged as suspicious but turn out to be legal. Industry sources routinely put the number of false positives in financial crime analytics as high as 90%. Machine learning can reduce false positives by up to a half, according to some estimates.

In contrast to retail banking where it is comparatively easy to track a customer’s transactions, an investment banking client could have multiple relationships with a bank – meaning traditional AML systems tend to generate lots of false positives. For example, companies usually have multiple legal entities both within and across different jurisdictions. The large sums of money that shuffle back and forth between these entities – all legitimate transactions – might be flagged by a rules-based system because they appear suspicious.

In cutting down false positives, banks accept the risk that a small number of genuinely suspicious transactions might slip through the net. But the hope is that traditional systems will hoover up any suspicious activity that machine learning may have missed. For this reason, most banks are looking to develop machine learning alongside existing systems, rather than instead of them.

“A machine learning-based solution may not be sufficient in and of itself and may need to supplement the rule based monitoring to achieve the efficiency target of the institution,” says a senior financial crime expert at a global bank.

Financial firms are still honing the models they’re using to monitor suspicious transactions. For example, Barclays is piloting a machine learning system for its institutional brokerage business. The system can analyse the transactions that take place between a customer and different corporate entities, which could signal a potential money laundering risk. Similarly, Standard Chartered is using machine learning in its Singapore office to screen real-time transactions for money laundering violations.

Although it’s still too early to draw conclusions, Barclays expects the machine learning system to reduce false positives and allow investigators to focus on genuinely suspicious transactions in their suspicious activity reports (SARs). Banks predict that improving the effectiveness of SARs will not only raise the hit rate for money laundering investigations, but also slash the wasted cost of producing false SARs. The annual deluge of these reports has dogged the industry for more than a decade, and a 2017 report estimated that 80–90% of SARs were of no value to law enforcement efforts.

Given the scattergun nature of existing AML detection methods, it is no surprise that banks are looking to machine learning for a more targeted approach. According to an October 2018 report by the Institute of International Finance based on a survey of 59 banks, one-third of firms said they were already deploying machine learning for AML, while a further third said they were piloting such methods.

On the face of it, financial crime would appear to be a classic application where machines could pore over large sets of data and predict the likelihood of unauthorised or illegal activity. The rigid rule-sets that currently power legacy AML systems are less well equipped to detect new behavioural patterns.

One of the hallmarks of machine learning systems is their ability to self-adjust and identify relationships on the basis of new information. A fixed rules-based system can check whether a transaction fits within certain parameters – the amount of cash deposited in a retail customer’s account during a typical month, for example – but over time, the parameters might change: what is considered abnormal behaviour today might become normal behaviour a year from now. Or, criminals may modify their behaviour to adapt to rules in an attempt to circumvent detection.

These are patterns that rules-based systems will not be able to detect, but which more advanced behavioural monitoring systems may be able to.

Learned behaviour

Simple machine learning techniques have been in place within banks for some time. To take an example from credit cards, if a customer has never shopped at department store Macy’s, the first time they shop there the transaction might be flagged as unusual. But subsequent uses of the credit card in the store will be considered normal and will result in a lower risk score for the transaction. In a basic sense, the algorithm is said to have “learnt”.

However, many rules-based systems are unable to learn. Such models typically employ a technique called logistic regression, a simple risk estimation method. A credit card company might suspect that when a mismatch exists between the name a customer uses on a transaction and the name on file at the customer’s credit bureau, the likelihood of fraud goes up. It could run that assumption through the model and see if the data supports it. The algorithm might determine that, based on historical data, a mismatch carries an elevated chance of fraud.

Over time, however, the reliability of that assumption could decline – perhaps because users stop sharing information with credit reference firms, and the bureau relies on information that is not up to date.
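
A toy version of that workflow, with simulated figures: fit a logistic regression on a historical name-mismatch flag and read off the implied fraud probability. The point of the sketch is that the fitted coefficient stays frozen at whatever the history showed – if behaviour drifts, the model does not update itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
mismatch = rng.integers(0, 2, size=(10_000, 1))  # 1 = names disagree
# Simulated history: 0.5% base fraud rate, rising to 5% on a name mismatch
fraud = rng.random(10_000) < np.where(mismatch[:, 0] == 1, 0.05, 0.005)

model = LogisticRegression().fit(mismatch, fraud)
# Implied fraud probability without, then with, a mismatch (~0.005, ~0.05)
print(model.predict_proba([[0], [1]])[:, 1])
```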

The moment you let a computer learn and make decisions on its own, you can no longer explain it

Fraud management executive at a global bank

Machine learning-based approaches can be used to create new categories, or buckets, of customers that don’t rely on predefined groupings. Historically, banks would segment customers by demographic criteria such as age, occupation and income, but machine learning is capable of analysing more data in finer levels of detail. It might turn out that grouping customers by hobby or where they go on holiday might have more predictive power.

“When you’re trying to detect abnormal behaviour, you’ve got to group similar objects. The historic way of doing that would be using pre-set rules, whereby we determine what segment you should be in. But with machine learning, segmenting takes in lots more data to determine how customers should really be grouped together,” says Ray O’Brien, global risk chief operating officer and head of global risk analytics at HSBC.

With money laundering, segmenting customers is complicated by the fact that an individual might be doing business via a correspondent bank, for which the primary bank has little information. In such cases, machine learning can be used to create “pseudo customer” segments, based on frequency of transaction, country of origin, or counterparty.

HSBC is working on such applications. The bank has developed around 400 characteristics that feed into its pseudo customer segments. The work has resulted in a 50% reduction in initial alerting as the focus is shifted to a smaller segment of the population, Patrick Dutton, regional head of intelligence, analytics and systems delivery at HSBC, told Risk.net last year.

Machine learning algorithms can choose an optimum number of segments – the grouping that provides the maximum separation between segments. For example, segments could be created based on occupation. One segment might contain students and another might contain bankers. Within the student segment, there might be subcategories for those from wealthy families versus those from middle-income ones. Within bankers, the subcategories might be investment bankers versus clerical staff. Firms are betting that a more granular understanding of customers will give a greater insight into what’s unusual or suspicious.
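
A common way to pick that optimum number of segments is to cluster on behavioural features and score each candidate grouping by its separation – the silhouette score is one standard measure. The features and populations below are invented stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Columns: monthly transaction count, average value, cross-border share
customers = np.vstack([
    rng.normal([20, 500, 0.05], [5, 100, 0.02], size=(300, 3)),    # "students"
    rng.normal([80, 9000, 0.40], [20, 2000, 0.10], size=(300, 3)), # "bankers"
])
Z = StandardScaler().fit_transform(customers)

best_k, best_score = None, -1.0
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Z)
    score = silhouette_score(Z, labels)  # higher = better-separated segments
    if score > best_score:
        best_k, best_score = k, score
print(best_k, round(best_score, 3))  # expect two well-separated segments
```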

Explain your workings

Machine learning exists as a spectrum of activity, from simple approaches to more complex models. An algorithm can be programmed to analyse data within existing parameters, while another can be taught to reinvent itself as new patterns emerge in the data. A senior fraud management executive at a large global bank says: “You can do it as a one-time exercise, where you pull data and learn through algorithm, so you have an algorithm capable of assessing a new event. That’s one thing. The other thing is an algorithm that’s constantly changing and identifying new patterns. That’s not being done widely.”

As algorithms develop, they may reach the stage of unsupervised learning models. When examining hundreds of variables, machine learning models will adjust risk factors according to new patterns they are picking up. If, for example, illegal transactions have recently been concentrated in a particular geographic region, the machine will identify it as a new risk factor.

The increased complexity brings challenges for model governance. When a model starts to learn and make decisions on its own, it becomes harder to explain. An analogy can be made with a chatbot that starts learning words its developers wish it wouldn’t, and responds to customers in an inappropriate manner.

“The moment you let a computer learn and make decisions on its own, you can no longer explain it,” says the fraud management executive.

Before you deploy a deep neural network, weigh the risk and reward. If you can explain it, then go for it

David Stewart, SAS

Given the correlation between the predictive power of machine learning techniques and the difficulty of explaining their workings, banks face a tricky balancing act in deciding how complex a model should be. The explainability factor has so far inhibited banks from realising the full potential of machine learning.

HSBC’s O’Brien says: “In financial services, we are at the beginning. If machine learning is going to be used where it could affect people, it has to be transparent, explainable, and undergo independent validation.”

Stewart of SAS agrees: “There are some mature methods, like regression analysis and decision trees, that are more broadly understood and easier to explain to an auditor or examiner. Before you deploy a deep neural network, weigh the risk and reward. If you can explain it, then go for it.”

The reluctance to explore the limits of machine learning may explain why the models most banks have implemented to date in the fight against money laundering have been static, in the sense they’re based on coded instructions rather than evolving as new events happen.

“I don’t know of any financial institution that is [solely] relying on automated learning for fraud detection. What I see being used is traditional modelling where you take historical data from fraud cases, and run it through algorithms like logistic regression or decision trees,” says the fraud management executive.

Open to innovation

Recent guidance from the US Federal Reserve, along with the financial crime agency FinCEN, has put a new slant on the development of machine learning tools for AML. The senior financial crime expert says: “Given the recent encouragement from regulators to innovate – and not be penalised if an outcome is different than that of current monitoring – I think it is safe to assume that more and more financial institutions are now willing to try new approaches.”

The Fed guidance reassures financial institutions that they will not face “supervisory action” if new techniques such as machine learning reveal deficiencies in existing AML efforts. The regulatory emphasis now is on innovation, rather than strict control.

“All regulators are open to these conversations. Some are more mature in terms of what is the art of the possible,” O’Brien says. “I think that areas like financial crime and internal decision making will use machine learning first before capital models.”

The new guidance does not mean that regulators are taking a light-touch approach to the supervision of AML. If anything, many are seeking to tighten their procedures. The UK financial regulator has announced its intention to ramp up spot checks of financial firms for money laundering compliance. And the European Union is taking steps to unify its AML reporting practices, following the discovery of systemic violations at European banks.

Innovation will need to focus on teaching the machine how to detect prohibited behaviour from available data. With rule-based systems, humans identify ways that people are trying to launder money, build a rule around it, put that rule into a system and construct scenarios that can be fitted to real-world cases. But what about other scenarios that might not have happened in real life, but are still plausible? The machine would need to be sufficiently flexible to search out those as well.

The machine needs a sufficient amount of data to build the algorithm, but how much – and which – data to use will determine the accuracy of the model. A model with too many parameters relative to its data can overfit, becoming so closely tuned to the training sample that it can’t make accurate predictions for out-of-sample data. Conversely, too little data can lead to underfitting, where the model misses the underlying patterns and again predicts poorly out of sample.
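
The trade-off is straightforward to demonstrate: compare training and held-out accuracy as model complexity grows. In the sketch below, on synthetic data, a shallow tree underfits and a deep one overfits – acing the data it has seen and stumbling on data it hasn’t.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 8))
# Noisy ground truth driven by the first two features only
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=600) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for depth in (1, 3, 12):
    m = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(depth, round(m.score(X_tr, y_tr), 2), round(m.score(X_te, y_te), 2))
# depth 1 underfits (both scores mediocre); depth 12 overfits
# (training accuracy near 1.0, test accuracy falls away)
```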

“If the data you use to build and train your algorithm is bad, you'll have an algorithm that makes the wrong assumptions from the beginning. You need to make sure you have decent data quality,” says Adrien Delle-Case, policy adviser at the Institute of International Finance.

Editing by Alex Krohn

Jayati Chaudhury’s views do not necessarily represent those of her employer, Barclays

EU G-Sibs add €2.7bn of op RWAs in 2018

By Alessandro Aimone | Data | 9 April 2019

Systemically important European Union banks’ operational risk-weighted assets grew by €2.7 billion ($3 billion) in 2018, with BNP Paribas and Crédit Agricole leading the charge.

Aggregate operational RWAs across the 11 global systemically important banks (G-Sibs) in the EU stood at €576.3 billion at end-December. Though this represents just a 0.5% aggregate increase year-on-year, it comes off the back of two consecutive years of declines.  

BNP Paribas posted the largest increase in euro terms, with op RWAs jumping to €73 billion from €66.5 billion at end-2017, a 10% increase. Crédit Agricole followed with a €3 billion (12%) rise to €31 billion.

Smaller increases were also reported by Societe Generale, Deutsche Bank and Groupe BPCE, which saw their op RWAs rise by €600 million, €400 million and €2 million, respectively.

In contrast, ING posted the largest decline – at 12% – with op RWAs down €4.6 billion to €35.5 billion.

Barclays was the only bank sampled with op RWAs flat on the year, at £56.7 billion.

The aggregate op risk capital charge, calculated as 8% of RWAs, stood at €46.1 billion at end-2018, €214 million higher than the year prior. 

The European Banking Authority (EBA) estimated that EU G-Sib op risk capital would increase by 46.6% in aggregate once Basel III reforms are fully implemented. Applying this estimate to the end-2018 amounts, total op risk capital would be expected to rise by €21.5 billion to a total of €67.6 billion. The new rules are due to be phased in from 2022.
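
A quick back-of-the-envelope check of those figures (amounts in € billions):

```python
rwa_total = 576.3                     # aggregate op RWAs, end-2018
capital = 0.08 * rwa_total            # capital charge at 8% of RWAs
uplift = capital * 0.466              # EBA's estimated Basel III increase
print(round(capital, 1))              # 46.1
print(round(uplift, 1), round(capital + uplift, 1))  # 21.5 and 67.6
```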

What is it?

Existing Basel Committee rules allow op RWAs to be calculated under the advanced measurement approach using banks’ own internal models. A standardised approach and basic indicator approach are available for those businesses a bank is unable to cover with the AMA.

At end-2017, the committee scrapped the AMA and replaced it with a revised standardised approach (SA), under which firms will have to calculate their operational risk using the standard-setter’s own formulae.

Why it matters

The aggregate tick-up across the EU G-Sibs was largely driven by BNP Paribas’ decision to set op risk to the level of the regulator-set standardised approach, while the against-the-trend decline at ING was prompted by an internal review of its op risk model.

A wide dispersion of op RWA movements is to be expected, as most EU banks calculate the lion's share of their op risk requirements using the AMA. As this method depends on banks' own models, inputs and scenarios, rather than fixed formulae, a great degree of variation in RWA outputs is natural. 

But G-Sibs' op RWAs may start moving in concert when Basel III rules take effect, which will scrap op risk models for good. The EBA's estimates show the net effect will be a big increase in op risk capital. Whether G-Sibs will simply swallow this additional charge, or try and make offsetting regulatory capital savings elsewhere in their businesses is a €21.5 billion question. 

Correction, April 12, 2019: UniCredit's operational RWAs stood at €29.5 billion at end-2018, and not €2.4 billion, as previously stated. This article has been amended to reflect this.

Op risk data: Europe’s AML drive follows Danske and ING slips

By ORX News | Opinion | 9 April 2019

Also: top five losses include mortgage penalty for Citi and reporting fines for Goldman, UBS. Data by ORX News

The largest publicly reported loss in March is a $49 million penalty levied on Citigroup for mortgage failings in the US. The bank denied benefits to mortgage borrowers based on race, colour, national origin or sex, in violation of the Fair Housing Act, according to the Office of the Comptroller of the Currency.

Citigroup provided a loan pricing programme that gave eligible borrowers a credit to closing costs or an interest rate reduction. But due to poor staff training and inadequate review processes, certain customers failed to receive the benefits. The bank self-reported the issue in 2015, and will pay a $25 million fine and $24 million in restitution.

The second largest operational risk loss is a £34.3 million ($45.4 million) fine imposed on Goldman Sachs for failing to accurately report more than 200 million transactions between 2007 and 2017.

The UK’s Financial Conduct Authority found that Goldman breached its duties under European Union rules known as Mifid, in organising and controlling its transaction reporting affairs. The mis-reporting spanned the bank’s change management processes, its maintenance of counterparty reference data and how it tested the accuracy of reports.

In third place is the embezzlement of $40.5 million from CBS Employees Federal Credit Union by one of the firm’s managers. For at least 19 years, Edward Rostohar made payments from the credit union to himself, either online or by forging the signature of a fellow employee on cheques made payable to himself, a Justice Department complaint states.

Prior to working at CBS, Rostohar was an examiner at the credit union regulator, and the experience he gained there had helped him to avoid detection, he said, as he knew what methods of investigation the regulator used. CBS has been forced into liquidation following the announcement of the fraud.

UBS faces the fourth largest loss, in the form of a £27.6 million ($37 million) fine from the FCA for reporting failures under Mifid rules. Similar to the Goldman Sachs loss above, UBS submitted incomplete or inaccurate reports for 140 million transactions between 2007 and 2017. The FCA found errors in UBS’s systems, IT logic and/or reporting processes; weaknesses in change management controls; and weaknesses in controls around the maintenance of reference data.

In an apparent case of customer discrimination, the fifth largest loss sees Brazilian savings bank Caixa Econômica Federal fined 127.4 million reais ($33.4 million) for failing to make branch modifications for disabled customers.

The bank did not meet a 2008 agreement to provide such facilities, and was fined in 2011, at which stage it attempted to negotiate alternative forms of payment, according to local media reports. However, CEF allegedly did not take steps to pay the fines, and now faces a larger penalty.

Spotlight: mutual fund mis-selling

A US regulatory probe has resulted in investment advisers agreeing to repay $125 million to clients, many of them retail investors, for inappropriate mutual fund sales. The Securities and Exchange Commission discovered that advisers had placed investors in mutual fund share classes when lower-cost share classes of the same fund were available. The firms had also failed to disclose conflicts of interest related to the sale of these share classes, such as fees they received. Wells Fargo will make the largest payment, at $17.4 million.

The repayments are part of a 2018 disclosure initiative that encourages brokers and advisers to self-report violations to avoid fines. Another US regulator, Finra, launched a similar scheme in January relating to savings plan share-class recommendations. Like the SEC, Finra will waive fines for firms that self-report relevant supervisory violations by April 1.

The SEC says the initiative allows it to allocate its resources effectively, so other US regulators may follow suit, reflecting a reduced emphasis on direct regulatory action during the Trump era.

In Focus: Europe tightens AML data-sharing

Europe’s main financial supervisors have agreed to standardise the exchange of information relating to anti-money laundering and terrorism financing. The agreement, reached on January 10, will require the approval of the European parliament before it becomes law.

The move follows major AML violations at Danske Bank’s Estonian branch, which triggered calls for an EU-wide AML body, a record €775 million ($900 million) AML-related settlement reached by ING in September 2018, and ongoing high-profile investigations involving Deutsche Bank and Nordic banks Swedbank and Nordea. Moreover, Denmark and Sweden have received criticism over relationships between some high-ranking regulatory staff and bank executives, all of which has called into question the region’s reputation for transparency.

The National Bank of Ukraine also appears to be demonstrating an intolerance of AML violations under its jurisdiction. The central bank fined Ukrsotsbank 30.5 million hryvnia ($1.1 million) in November and Sberbank 94.7 million hryvnia in January 2019 for breaching financial monitoring legislation.

Establishing effective supervision over its financial industry may be part of Ukraine’s bid to join the EU. This year, NBU received the Transparency Award from Risk.net sister publication Central Banking, reflecting significant improvement in internal and external communications, and overall transparency.

Excluding fines, firms have faced high costs to remedy AML failures. Danske has been required to set aside an extra Dkr10 billion ($1.6 billion) in capital, and said it would donate an estimated Dkr1.5 billion of income generated by its Estonian branch to combatting international financial crime. French insurer CNP said improvements to its AML/CTF model would cost €20 million over two years, more than double the €8 million fine it received from French regulators in July.

Editing by Alex Krohn

All information included in this report and held in ORX News comes from public sources only. It does not include any information from other services run by ORX and we have not confirmed any of the information shown with any member of ORX.

While ORX endeavours to provide accurate, complete and up-to-date information, ORX makes no representation as to the accuracy, reliability or completeness of this information.

The future of operational risk management

Advertisement | 8 April 2019

As the efficiency of operational risk management remains a top priority and pressure to maximise value increases, emerging technology could prove crucial. Nitish Idnani, leader of oprisk management services at Deloitte, explores how the oprisk management space could look in the future if it continues its current evolution, and discusses the potential impact of key technologies

The efficacy and efficiency of operational risk management continue to be a major priority in today’s business climate. The pressure to demonstrate the value of oprisk management frameworks – with risk managers increasingly expected to do more with less – is growing. This pressure is creating an incentive for risk leaders to explore and embrace new technologies and techniques that can help improve their programmes.

Predictive risk intelligence – the use of advanced analytics for pattern recognition, correlation and causal analysis – gives oprisk managers a head start in identifying the build-up of potential risk and the need for remedial action. Banks should seize the opportunities made possible by today’s advanced tools and the ubiquity of vast amounts of data.

Predictive risk analytics, machine learning and artificial intelligence can help efficiently build and mine large and complex datasets that combine traditional Basel Committee on Banking Supervision oprisk loss data with other data sources, including transaction data; non-transaction data such as human resources, compliance and other internal management information; and external data such as sensing data, social media, customer complaints and regulatory actions. These aggregated datasets provide billions of data combinations that can drive improved risk results and insights, and may increase the likelihood of uncovering patterns and correlations that were previously not noticed until too late, if at all. 

Over the past 12–18 months, there have been moves toward predictive risk intelligence. Globally, a greater number of organisations are trying to make their oprisk management programmes more forward-looking. Banks have long been interested in finding ways to enhance their traditional oprisk management practices. Although historical data on operational losses is still the baseline for complying with regulatory capital rules, such data has always been a blunt instrument for controlling loss and risk profiles. In the past, the necessary tools and technologies to make more insightful correlations and predictions did not exist. Occasionally, experienced oprisk practitioners – with help from data scientists – have used intuition to identify patterns between risk profiles, losses and events in legacy models. However, this generally did not happen until long after the event occurred, and was often limited to situations where extreme data variations were clearly visible – situations so infrequent they had no real predictive value.

The main driver of careful design for most data models is to build a foundation that positions an organisation to acquire better intelligence around a subject. Patterns and behaviours can help organisations understand, manage or predict the forces that drive them. Given the nature of oprisk, even predictable patterns and behaviours can be challenging to identify consistently. Now might be the time to revisit the foundation of the traditional oprisk data model – including the data collected.

One instructive way might be to learn from techniques developed outside risk management, such as customer marketing and sales. These disciplines have well-grounded techniques to help understand customer behaviour to generate additional sales and further build customer loyalty. To create these benefits, retail organisations had to monitor data from numerous sources to understand the full profile, preferences and buying patterns of customer behaviour. This ranged from monitoring and understanding customer traffic in retail institutions to developing merchandising and designing websites and applications to increase sales and customer loyalty. In essence, this was a period of trial and error in understanding the customer interaction and engagement environment. Once built, it continues to evolve, adapt and improve. In oprisk management, these successes can be emulated by collecting wide-ranging data through systems, applications and processes – and through human interactions – then deriving meaningful patterns and behaviours in line with the unique risk challenges of individual organisations and lines of business.

New data science applications 

Some may say the current data environment is too vast and expansive to effectively monitor and evaluate. Nevertheless, with new big data science techniques, institutions can now build these capabilities with increasing ease and less investment. The challenge typically lies in scoping what type and range of data will be relevant to obtain the desired model results. This is where leveraging business knowledge, as well as the experience of the oprisk manager, will continue to be important.

Figure 1 is an illustrative data architecture that highlights legacy Basel Committee components and a broader set of data sources required for predictive analysis. Broadly, this architecture includes:

1. Data sources, including systems interfaces, messaging and data flows for bringing together disparate data.

2. Quantification calculators – the models that combine internal and external loss data to produce loss estimates; for example, current capital quantification and/or Comprehensive Capital Analysis and Review (CCAR) operational stress capital.

3. Core predictive analytics to identify patterns, correlations and causation that are otherwise difficult to spot.

4. Reporting capabilities – the mechanism for communicating current and potential oprisk exposures to senior management and the business line units that manage oprisk on a daily basis, and integration back into traditional oprisk management processes.

Multiple vendors serve the predictive analytics market, from established players to emerging companies. While the solutions offered by various companies have points of differentiation, most predictive analytics solutions offer some core features and capabilities, including support for data preparation and selection, insight generation and visualisation.

Many vendors also offer predictive modelling capabilities that use data mining and probability to forecast outcomes. Each model is made up of multiple predictors – variables likely to influence future results. Once data is collected for relevant predictors, a statistical model is formulated. The model may employ simple linear equations or be a more complex neural network that is mapped through sophisticated software. As additional data becomes available, the statistical analysis model is revalidated or revised. Many vendors are also starting to offer machine learning capabilities to help with the process of identifying the most appropriate – in effect, the strongest – predictive model for a given dataset.
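By way of illustration only, here is a minimal sketch of that workflow using scikit-learn on synthetic data: two candidate models – a simple linear equation and a small neural network – are scored by cross-validation and the strongest is retained. The predictor names in the comments are hypothetical, and real oprisk datasets would be far messier.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors: transaction volume, staff turnover, system alerts
X = rng.normal(size=(n, 3))
# Synthetic target: losses with a mild non-linear effect plus noise
y = 2.0 * X[:, 0] + np.maximum(X[:, 1], 0.0) + 0.5 * rng.normal(size=n)

candidates = {
    "linear": LinearRegression(),
    "neural_net": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                               random_state=0),
}
# Cross-validated R^2; refit and re-score as new loss data arrives
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
print(scores, "-> strongest:", max(scores, key=scores.get))
```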

Many vendors also offer embedded predictive analytics capabilities that can be used in the context of business processes. Embedded analytics can help organisations gain the visibility required to understand current and historical results, as well as the causal factors influencing them. Embedded predictive analytics also enable organisations to predict system health and trigger alerts or to recommend corrective actions, which can help ensure systems are performing as intended.
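A deliberately naive sketch of that alerting pattern – with invented metrics and a simple statistical band standing in for a true predictive model – might look like this:

```python
def predicted_band(history):
    """Naive health forecast: mean +/- 3 standard deviations."""
    mean = sum(history) / len(history)
    sd = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5
    return mean - 3 * sd, mean + 3 * sd

runtimes = [102.0, 98.5, 101.2, 99.8, 100.4]  # e.g. batch runtimes, minutes
low, high = predicted_band(runtimes)

latest = 131.0
if not low <= latest <= high:
    print(f"ALERT: {latest} outside predicted band ({low:.1f}, {high:.1f}) "
          "- recommend corrective action")
```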

Organisations have also begun the journey to evolve their oprisk architectures. The data components and infrastructure that support oprisk are beginning to shift to include a broader definition of the relevant data elements, and predictive analytics and modelling. As oprisk management continues to mature, its future state is likely to look very similar to what has been described in this article.

About Deloitte

Deloitte helps organisations effectively navigate business risks and opportunities – from strategic, reputation and financial risks to operational, cyber and regulatory risks – to gain competitive advantage. We apply our experience in ongoing business operations and corporate lifecycle events to help clients become stronger and more resilient. Our market-leading teams help clients embrace complexity to accelerate performance, disrupt through innovation and lead in their industries.

The author

Nitish Idnani leads Deloitte Risk and Financial Advisory’s operational risk management services team. He can be contacted via email.

As used in this document, “Deloitte” means Deloitte & Touche LLP, a subsidiary of Deloitte LLP. Please see www.deloitte.com/us/about for a detailed description of our legal structure. Certain services may not be available to attest clients under the rules and regulations of public accounting.

This article contains general information only and Deloitte is not, by means of this feature, rendering accounting, business, financial, investment, legal, tax or other professional advice or services. This article is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional adviser. Deloitte shall not be held responsible for any loss sustained by any person who relies on this article.

Fed preps draft white paper on cyber risk

By Steve Marlin | News | 5 April 2019

US regulator seeks industry input for initiative on classifying and modelling threats

The US Federal Reserve has launched a project to find a common way of classifying and modelling cyber risk, amid continued fears over banks’ collective readiness to meet the existential threat it poses to the financial system. 

The initiative, which was showcased by representatives from the Fed board at a workshop hosted by the Federal Reserve Bank of Richmond on March 28, seeks to synthesise ideas from risk managers, academics and policy-makers, before disseminating them to the broader market.

The move comes as policy-makers focus on how to resolve the disconnect among financial institutions over what constitutes cyber risk, as well as the need to distinguish between the technological aspects of cyber risk and its impact on the business. Although the Financial Stability Board (FSB) has issued a lexicon for cyber risk, there is no consensus among financial institutions on terminology, which stands in the way of effectively measuring and managing the risk, policy-makers point out.

“The lexicon was published last year, but not all banks define cyber risk the same way,” says Filippo Curti, an economist at the Richmond Fed. “If we want to measure cyber risk, we first need to agree on what cyber risk is.”

The presentations at the workshop were selected from a large number of proposals submitted to the Richmond Fed. The ideas will be distilled and developed into a white paper, to be issued in draft form by the end of the summer. Another workshop will be held later this year, when the final paper will be published.

This initial phase of the project will focus on devising a classification scheme for cyber risk. The actual measurement of cyber risk – a notoriously difficult task, owing to the ever-morphing nature of the threat and the non-linear relationship between an organisation’s controls and its risk exposure – will be the focus of a second phase, once the first is complete.

The Fed has acknowledged the need for the official sector to help banks improve their resilience to cyber threats. In a 2018 speech, Fed vice-chairman Randal Quarles said the central bank was “committed to strategies that will result in measurable enhancements to the cyber resiliency of the financial sector”.

Losses due to fraud or business disruption can be triggered by a cyber attack, or by different channels. It may not be necessary to completely destroy the Basel definitions to create a common taxonomy

Filippo Curti, Federal Reserve Bank of Richmond

The Fed is currently participating in an FSB project to develop effective practices relating to a financial institution’s response to and recovery from a cyber attack, on which a progress report will be published by mid-2019. It’s also considering adopting the CERT Resilience Management Model (CERT-RMM) framework developed by Carnegie Mellon University as a standard for operational resilience.

The Fed, along with other US regulators, proposed in 2017 requiring companies to return to operations within two hours of a debilitating cyber attack. The proposal was criticised by the industry as unrealistic, and never moved into the rulemaking phase.

Industry efforts

In the meantime, banks and insurers are increasingly looking to collaborate and pool their resources when it comes to cyber risk management. ORX, which collects operational loss event data from banks and distributes it among members, says it is working with a group of about 30 financial institutions to enhance cyber risk management. The project, also showcased at the Richmond Fed workshop, centres on the sharing of loss information, best practice and a common taxonomy.

The body’s work is complicated by the fact that some firms treat cyber as a separate risk category in its own right, others treat it as a subset of other operational risk categories – and still others don’t treat it as an op risk at all.


“Over the last few months, we’ve been working with members on improving the identification, classification and management of cyber risk,” said Steve Bishop, ORX head of risk information, speaking on the sidelines of the Richmond Fed event.

Pointing to the data gap this creates, Bishop noted: “Operational risk’s historic focus on financial loss as the main trigger for op risk events has led to a shortage of data to help organisations understand and benchmark their cyber risk exposure. Many cyber events often don’t have significant direct profit-and-loss impact, but do cause significant business disruption, reputational damage and clean-up costs. The work we are doing is aiming to address this imbalance.”

Still, there was no doubting the scale of the problem, he added. According to ORX News, publicly reported losses across the financial services sector attributable to cyber-related data breaches and instances of fraud and business disruption totalled $935 million in 2018. Many of these attacks, such as those involving the Swift payments network, exploited control weaknesses at smaller banks.

A large number of presentations at the workshop dealt with the question of whether the existing Basel Committee on Banking Supervision operational risk taxonomy, which dates back to 2001, needed to be revised or rewritten from scratch in order to accommodate cyber risk.

One US bank said it was scrapping its use of the Basel taxonomy, and had instead created its own op risk classification scheme with eight categories: operations; compliance; data management; models; information security; business continuity; third-party; and technology.

A risk management executive at another global bank said it was treating cyber not as a separate operational risk, but “as a pathway to the Basel categories”, such as internal and external fraud. The bank is linking its business processes and controls to each of the Basel categories, then working with IT and risk teams to identify the impact of attacks from various threat actors, such as nation states, organised crime groups and hackers.

Fair assessments provide a lot of trueness, but lack precision. As you start to source inputs, the analysts are left playing guessing games

Cyber risk manager at a large bank

The advantage of placing the focus on business processes and controls is that they tend to be stable over time, whereas cyber threats and attackers are constantly changing, proponents argue.

“Among the views presented, the one that I currently find more appealing is the one that perceives cyber risk as a channel through which operational risk manifests itself,” says the Richmond Fed’s Curti. “Losses due to fraud or business disruption can be triggered by a cyber attack, or by different channels. It may not be necessary to completely destroy the Basel definitions to create a common taxonomy.”

While the primary focus of the workshop was on identification and classification of cyber risk, there was some discussion of measurement and modelling, which has been problematic for banks. In particular, one large bank described its experience with Fair (Factor Analysis of Information Risk), a widely used cyber risk modelling framework.


Fair seeks to provide a straightforward map of risk factors and their interrelationships. The approach’s outputs can then be used to inform a quantitative analysis, such as Monte Carlo simulations or a sensitivities-based analysis.

The large bank settled on Fair in 2017 as the best approach for modelling cyber risk. In using it, the bank found that, while the model’s overall predictions had a high degree of ‘trueness’ – that is, they were close to the actual losses that could be expected – they lacked precision, meaning they would vary widely from one simulation to the next. This was due to analysts having to make guesses on the inputs, such as the frequency of loss events, every time they ran the model.

“Fair assessments provide a lot of trueness, but lack precision,” said a cyber risk manager at the bank. “As you start to source inputs, the analysts are left playing guessing games.”

Rather than having analysts input variables on an ad hoc basis, the bank says it created a predefined library of inputs, so that instead of guessing the likelihood of a social engineering attack, for example, analysts can search the library for a value that’s been agreed on beforehand and plug it into the model. Precision has improved markedly as a result, the manager said.
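The mechanics can be sketched in a few lines. The code below is not the bank’s model – Fair’s factor decomposition is considerably richer – but it shows the basic pattern: a Monte Carlo loss simulation whose frequency and severity parameters come from a pre-agreed library rather than per-run analyst guesses. All parameter values are invented.

```python
import numpy as np

# Pre-agreed input library (illustrative values): annual event frequency
# and lognormal severity parameters for each scenario
INPUT_LIBRARY = {
    "social_engineering": {"freq": 4.0, "mu": 11.0, "sigma": 1.2},
    "ddos":               {"freq": 2.0, "mu": 10.0, "sigma": 0.8},
}

def annual_loss_sims(scenario, n_sims=50_000, seed=42):
    p = INPUT_LIBRARY[scenario]                    # no ad hoc guessing here
    rng = np.random.default_rng(seed)
    counts = rng.poisson(p["freq"], size=n_sims)   # events per simulated year
    return np.array([rng.lognormal(p["mu"], p["sigma"], size=k).sum()
                     for k in counts])

sims = annual_loss_sims("social_engineering")
print(f"mean annual loss ${sims.mean():,.0f}; "
      f"99th percentile ${np.percentile(sims, 99):,.0f}")
```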

Curti notes that models will be the focus of the next phase of the Fed’s research effort, after the definitions phase has been completed.

He adds that it would be unwise for authorities to mandate the use of particular models or approaches, however: “There could be standardised definitions, identification, maybe even standardised scenarios. But I don’t think there’s any will to actively impose a particular model because of the potential systemic risk. Every bank should be able to use the model that best fits their own business.”

Editing by Tom Osborn

Op risk past is prologue for UK banks

By Alessandro Aimone | Opinion | 5 April 2019

UK banks will not be allowed to forget past misdeeds

“The wheels of justice turn slowly, but grind exceedingly fine,” goes the old adage. Risk managers at the top five UK banks had reason to recall it last year, as they pondered the income-sapping effects of an aggregate £6.5 billion of legal charges – most of them relating to misdeeds committed during the financial crisis.

The US Department of Justice (DoJ) has levied hefty fines on Barclays, HSBC and RBS for mis-selling residential mortgage-backed securities in the run-up to 2008. The high street banks have also shelled out billions of pounds in compensation to customers who were mis-sold payment protection insurance, a product range discontinued over a decade ago.

Legacy misconduct issues place banks in a double bind. Once wrongdoing is uncovered, banks must set aside legal provisions to cover their expected fines. While regulators ponder the penalty, this cash is locked up for an uncertain amount of time, rather than being put to work elsewhere in the business.

And when the fines are handed out, they show up in the banks’ operational risk capital requirements. Past op risk losses are a key input in the advanced measurement approach (AMA) used by most large banks to calculate their risk charges. Once a fine is incurred, it sits in a bank’s loss history, influencing op risk capital requirements for years after, even when the process, system and governance failures that typically caused such losses in the first place have been fixed.

Once a fine is incurred, it sits in a bank’s loss history, influencing op risk capital requirements for years after, even when the process, system and governance failures that typically caused such losses in the first place have been fixed

This feature of the op risk framework is not going to go away once the final Basel III package comes into effect. The reforms ban the use of AMA models and replace them with a revised standardised approach (SA). One of its main features will be a 10-year lookback period for historical losses. Crucially, it’s the date on which an op risk loss or legal fine is incurred that matters in this lookback period, not the date on which the misconduct that led to the fine occurred.
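To see how the lookback bites, consider a stylised calculation based on the internal loss multiplier (ILM) in the Basel III final rules, where the loss component is 15 times average annual op risk losses over the preceding 10 years. All monetary figures below are hypothetical, and the real calculation involves further detail, such as loss-data thresholds and the bucket-dependent treatment of the business indicator.

```python
import math

def op_risk_capital(annual_losses, bic):
    """Stylised revised-SA charge: ORC = BIC * ILM, with
    ILM = ln(e - 1 + (LC / BIC)**0.8) and LC = 15 * 10-year average losses."""
    window = annual_losses[-10:]              # only the last 10 years count
    lc = 15 * sum(window) / len(window)       # loss component
    ilm = math.log(math.e - 1 + (lc / bic) ** 0.8)
    return bic * ilm

bic = 2.0                           # business indicator component, £bn
quiet = [0.1] * 10                  # a quiet decade of losses, £bn per year
fined = [0.1] * 9 + [1.5]           # same decade plus one huge fine

print(f"quiet history:  £{op_risk_capital(quiet, bic):.2f}bn")
print(f"with 2018 fine: £{op_risk_capital(fined, bic):.2f}bn")
# The fine keeps inflating the charge until it rolls out of the window in 2028
```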

This means the huge DoJ fines shouldered by HSBC, RBS and Barclays last year will factor into their op risk capital calculations until 2028 – two decades after the mis-selling scandals that triggered them took place.

The shift to the revised SA is expected to cause European banks’ op risk capital to soar, partly because this is a cruder measure of op risk than the AMA. In anticipation of the new regime, which starts to come into effect from 2022, some banks have already started abandoning their models.

Barclays started setting its op risk capital according to the current standardised approach last year. Other UK banks may follow this year in order to frontload the capital shock. The one thing they won’t be able to do, however, is escape the past.  

Lower credit risk shrinks UK banks’ RWAs

By Alessandro Aimone | Data | 29 March 2019

Risk-weighted assets across UK banks fell £78 billion ($102 billion) in the last quarter of 2018, driven down by lower credit and counterparty risk (CCR).

Figures from the Bank of England show total RWAs for the UK banking sector amounted to £2.83 trillion at end-December, down 3% from £2.91 trillion the previous quarter and 2% from £2.89 trillion a year earlier.

The bulk of the aggregate reduction came off the back of RWAs related to credit and counterparty risk, which dropped 3%, to £2.04 trillion over the fourth quarter of 2018.

Quarter-on-quarter, market RWAs fell 1.3% to £367 billion, credit valuation adjustment (CVA) RWAs dropped 3% to £93 billion, and other RWAs fell 34% to £21 billion. Year-on-year, market, CVA and other RWAs shrank by 3%, 18% and 2%, respectively.

In contrast, operational RWAs stayed flat on the previous quarter at £308 billion, and grew 2% on the year-ago quarter.

The drop-off in CCR RWAs accounted for 56% of the aggregate RWA reduction over 2014–18. Those classified as ‘other’ accounted for 28%, CVA for 17% and market risk for 5%. Op RWAs partially offset these reductions, adding back the equivalent of 6% of the decline.

The BoE reported that UK banks continue to be well-capitalised, with the sector’s total capital ratio increasing 90 basis points to 21.4% over the fourth quarter.

What is it?

The Bank of England publishes quarterly statistical releases on the capital levels and RWAs of the UK banking sector.   

RWAs are used to determine the minimum amount of regulatory capital that must be held by banks. Each banking asset is assessed on its risks: the riskier the asset, the higher the RWA and the greater the amount of regulatory capital that must be put aside.
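As a back-of-the-envelope illustration using the figures above – and the Basel minimum of 8% of RWAs for total capital, before buffers:

```python
rwa = 2.83e12                 # UK sector RWAs, end-2018 (GBP)
total_capital_ratio = 0.214   # reported total capital ratio, 21.4%

implied_capital = rwa * total_capital_ratio   # capital the ratio implies
basel_minimum = rwa * 0.08                    # 8% floor, before buffers

print(f"implied capital: £{implied_capital / 1e9:,.0f}bn")  # ~£606bn
print(f"8% minimum:      £{basel_minimum / 1e9:,.0f}bn")    # ~£226bn
```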

Why it matters

The BoE’s figures confirm our analysis of the five largest UK banks a few weeks back, in which we highlighted how lenders managed to reduce their aggregate RWAs through asset reductions, model and risk calculation changes, and favourable foreign exchange movements.

Looking at the UK sector as a whole, the shrinkage of CCR RWAs accelerated in the final quarter of 2018, dropping to their lowest level since the BoE’s data series began. It’s likely the same factors that cut these RWAs at the big five were also behind the overall sector-wide drop-off – asset sales, an improvement in counterparty creditworthiness, and tweaks to internal models.

Market RWAs fell for the third consecutive quarter across UK banks, to their lowest point since end-2016. Last time around, we predicted these assets would increase following market turmoil at the end of the year. We got it wrong, although the rate of decline was less sharp than at any point in 2018. Perhaps this reflects banks reining in their exposures as markets turned.

Op RWAs were flat on the previous quarter, but higher on the year. We recently looked into this and bet that legal charges, which feed into regulatory capital calculations, had something to do with the year-on-year increase.

CVA RWAs fell again in the final three months of the year after a small uptick in the third quarter. When we last looked at CVA capital charges, we explained that such falls can have various drivers: drops in total derivatives exposures, improvements in the creditworthiness of derivatives counterparties and changes to the capital calculation methodology – not to mention hedging activity.

Get in touch

Risk Quantum is launching a daily newsletter soon. Sign up to receive the latest data insights.

Did we miss anything in our analysis of the BoE figures? You can drop us a line at alessandro.aimone@risk.net, send a tweet to @aimoneale, or get in touch on LinkedIn.

Keep up with the Risk Quantum team by checking @RiskQuantum for the latest updates.

Tell me more

Model expansion cuts Barclays' counterparty risk by 24%

Legal charges topped £6 billion at UK banks in 2018

Top UK banks cut CVA capital by £190 million

UK banks find various ways to de-risk

View all regulator stories