ABA scenario analysis project could aid CCAR comparability

By Tom Osborn | News | 25 June 2019

Scheme to agree on common risk drivers could help Fed benchmark risk exposures, says JP op risk expert

A project led by a group of banks to standardise the way in which they approach scenario analysis could make it easier for US regulators to compare the level of operational risk exposure each firm faces, and ultimately the level of capital each should hold, senior industry figures have suggested.

The venture, spearheaded by the American Bankers Association (ABA), seeks to identify the key drivers of material operational risks facing firms across seven scenarios, including a rogue trading incident, a cyber attack on a critical application and the mis-selling of products to retail investors. It includes a number of banks subject to the US Federal Reserve’s Comprehensive Capital Analysis and Review (CCAR) programme, JP Morgan among them.

Scenario generation techniques allow banks seeking a forward-looking gauge of the risks they face to overcome a basic problem: a lack of readily available data. Instead, they use a family of quantitative approaches to determine their risk exposure for a particular factor – the likely degree of loss from a cyber attack, for instance – by working out what could go wrong, and how bad the consequences could be. The outputs of the analysis can then be used to inform a capital calculation, allowing the firm to put a dollar value on its risk exposure.
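By way of illustration, the sketch below shows one simple way scenario estimates can be turned into such a dollar figure: a Monte Carlo simulation that combines an assumed event frequency with a severity distribution fitted to two hypothetical workshop estimates, then reads off a high quantile of the simulated annual losses. It is a minimal sketch with invented numbers, not any bank's actual methodology.

```python
import numpy as np

# Minimal, illustrative sketch: turn two hypothetical scenario-workshop
# estimates into an annual loss distribution and read off a high quantile.
typical_loss = 20e6      # assumed median loss per event, in dollars
severe_loss = 250e6      # assumed 1-in-10 (90th percentile) loss per event
annual_frequency = 0.5   # assumed expected number of events per year

# Fit a lognormal severity to the two quantiles (median and 90th percentile).
mu = np.log(typical_loss)
sigma = (np.log(severe_loss) - mu) / 1.2816  # 1.2816 = z-score of the 90th percentile

rng = np.random.default_rng(42)
n_years = 100_000
annual_losses = np.zeros(n_years)
for year in range(n_years):
    n_events = rng.poisson(annual_frequency)
    if n_events:
        annual_losses[year] = rng.lognormal(mu, sigma, n_events).sum()

# One way to put a dollar value on the exposure: a high quantile of the
# simulated annual loss distribution.
print(f"99.9th percentile annual loss: ${np.quantile(annual_losses, 0.999):,.0f}")
```

Real frameworks layer on richer severity fits, correlations and expert overlays, but the basic pipeline – estimate, simulate, take a tail quantile – is the one described here.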

Under CCAR, banks are required to estimate expected losses under adverse economic conditions, which are based on scenarios the Fed itself sets, as well as losses stemming from significant yet plausible events that are idiosyncratic to the risks they face, which they set themselves according to their business mix. But at present, each bank's approach to scenario generation is also bespoke – making it difficult to compare the numbers their analyses produce on a like-for-like basis, even for the same risk factors, experts argue.

“My main problem with scenario analysis under CCAR is [that] the objective of the exercise, at least from a regulatory standpoint, is the comparability of the numbers,” said Evan Sekeris, a partner in the financial services, finance, risk and digital practice at Oliver Wyman, who was speaking during a panel debate at OpRisk North America on June 18.

“With the push to have different institutions generate scenarios using different means and at different levels, we end up with numbers that I doubt are comparable, because what is stressful in the eyes of one bank might not be in the eyes of another. So I think creating a standard for the scenarios, their drivers, will eventually help with the standardisation of the results. If it allows us to get a sense of comparability in the CCAR scenarios, that would be a huge win.”  

Speaking on the same panel, Nedim Baruh, head of operational risk measurement and analytics at JP Morgan, said he was “a big fan” of the ABA’s project, adding that the element of comparability it could ultimately allow should be of appeal to regulators.

Ironically, scenario analysis was seen as partly culpable for the lack of comparability between banks using the advanced measurement approach to set operational risk capital requirements, which in large part led to its phase-out under Basel III. Whereas US regulators insisted banks choosing to use the method should follow a strict loss distribution approach, European banks have generally been permitted to make greater use of scenario generation, the outputs of which, with a regulator’s approval, could be used to demonstrate a lower incidence of potential loss for a given risk factor, and hence lower capital requirements.

Nonetheless, argued Baruh, so far as CCAR goes, if the US Federal Reserve could get comfortable that banks were at least thinking about the way scenarios were constructed in a co-ordinated manner, the watchdog could consider adopting the industry’s approach to setting common scenarios for certain risk factors, and ultimately using them to help determine capital requirements.

“If we all standardise these models, and if we standardise the drivers, maybe the Fed can use those and project what they define to be operational stress. Right now, CCAR defines economic stress, but not operational stress – it relies on us to define our own stresses, based on our own vulnerabilities. Great, I think there’s some value in that – but it sure doesn’t help comparability. But if we all have similar models with the same drivers, [the Fed] can project those drivers in a stressful operational risk environment – that’s what the definition of stress is. That’s very helpful for us, but also for them, in the context of CCAR. [Could it be used to set] capital? Over time, probably, as well – but way down the road,” said Baruh.

The aim of the ABA's project is to produce a distribution of possible loss impacts for each risk factor it is considering, an approach that banks could use to help inform capital planning – although its initial focus is on helping the firms involved make better business decisions, said Jane Yao, senior vice-president of the benchmarking and survey group at the ABA, speaking during the debate.

If we all standardise these models, and if we standardise the drivers, maybe the Fed can use those and project what they define to be operational stress

Nedim Baruh, JP Morgan

“You have to look at the distribution, where the loss potential is – versus ‘if we put a billion dollars here, will that reduce my loss to a more acceptable level?’ That’s the business decision right there. We’re not trying to use scenarios for capital numbers. You can use this for business decisions, going into new markets. You can play the different scenarios: ‘Is this worth getting into? What would be my exposure?’ I think that’s the beauty of going to a structured approach: you really understand what is driving the losses, [what are] the risks.”

By standardising data on the drivers of risk behind exposures, used as inputs to inform scenarios, the project ultimately hopes to help banks benchmark their scenario outputs against one another, said Yao – giving them a rough idea, for the first time, of how material their own exposure to the risk of loss from rogue trading in their equities business is, for instance, compared with their peers.

It would, said Baruh, be “a big win” if the project allowed banks to benchmark risk exposure among themselves. Benchmarking against one’s peers was also a tremendous opportunity, he suggested – “because how many times in scenario [generation] have you said ‘how likely is that [outcome]?’, because it’s hard to backtest? But if you can benchmark, you get to see if you’re an outlier – that’s important”.

For that reason, regulators should also be keen, argued Baruh: “The idea of having some kind of standard model really should be appealing to the industry – not just for the banks, who would benefit from sharing information and spending less time [modelling] from a resource and cost-effectiveness point of view, but also the regulators. Because when regulators start talking about standardised models – which is the reason they went to the SMA [standardised measurement approach] – well, here’s a really important way of measuring and understanding operational risk.”

Risk Technology Awards 2019: The Analytics Boutique

By Alex Hurrell | Advertisement | 21 June 2019

Operational risk modelling vendor of the year

The Analytics Boutique (TAB) takes the view that analytics teams, rather than designing and developing code, should be focused on value-added tasks, assisted by user-friendly tools that incorporate industry standards and best practices, with full model governance, integrity of data and mechanised report generation. 

TAB’s OpCapital Analytics for operational risk modelling helps institutions gain a deep understanding of op risk exposures and potential losses. The solution enables the modelling and integration of the four key data elements – internal loss data, external loss data, scenario analysis and business environment internal control factors. This allows estimation of economic and regulatory capital requirements and loss forecasting under stress scenarios. 

Structured Scenario Analysis is a web-based solution for planning, developing and modelling risk scenarios under structured expert judgement methods. It enables users to implement robust and efficient scenario analysis processes that increase the quality of loss estimates, mitigate cognitive biases, link risk measurement and mitigation, and produce robust potential loss metrics and stable capital estimates.

A regulatory validation reporting tool automatically generates reports that contain all the required parameters, modelling options and inputs for an external analyst to replicate a model. The information is derived from the audit trail and is therefore consistent with it. 

TAB’s software goes beyond modelling and, by monetising the cost of risk, produces critical metrics that are understandable by senior management and can be used in the daily risk management of the institution. The metrics include capital requirements, net present value of mitigation plan investments and insurance policies, and monetary savings given mitigation plans.

 To avoid ‘black box’ models and ensure users have full control of the modelling process, the company publishes its methodologies and opens its source code to clients.  

The company recently added ModelRisk View for managing model risk through the lifecycle of a model, from development to validation, monitoring, internal audit, regulatory investigations and corrective action plans. The module creates a repository of documentation and will set up alerts for redevelopment, validation or other relevant actions. 

Rafael Cavestany Sanz-Briz, founder of and chief executive officer at TAB, says: “TAB provides a complete suite of fully integrated op risk analytics covering operational value-at-risk, scenario analysis, loss forecasting and prediction models, with industry standards and best practices for Solvency II, Pillar 2 and own risk and solvency assessment. Our solutions are user-friendly, eliminating manual processes and complex coding, and have strong model governance and reporting, with full audit trail and traceability. Special emphasis has been given to model validation and regulatory reporting functionalities as we responded to our clients’ needs for model reporting approval by several regulators. Finally, our tools maximise efficiency by eliminating all low value-added tasks, such as scenario aggregation, reporting and data management.”

Judges’ comments

“TAB offers a solid, end-to-end solution. It is more than just modelling, with analytics and reporting and model risk management.”

“Comprehensive, with a useful model risk component added recently.”

“The company’s open systems policy is to be applauded.” 

“A comprehensive solution that integrates with existing financial planning tools. Has a flexible and open approach.”

Risk Technology Awards 2019: The Technancial Company

By Alex Hurrell | Advertisement | 20 June 2019

Risk dashboard software of the year

JANUS Risk Manager is a multi-asset, multicurrency and multi-market real-time risk management and order validation system for exchange-traded financial instruments. Key to its success is its flexible and highly scalable dashboard for real-time data monitoring, updates and rule checks. The modern JANUS user interface allows users to easily configure their dashboards and prioritise content without the need for IT support or programming skills.

JANUS can take in thousands of client account limits, as well as market data and trades. The margin engine evaluates portfolio margins and profit and loss continuously, checking available spending power, as well as monitoring trading abuse and tracking a multitude of key indicators of trading system performance. The dashboard dynamically sorts or prioritises accounts in alert status by various criteria, enabling risk managers to see developing risks. Criteria can be regulatory, user-defined or based on client account characteristics. 

For example, positions that did not appear risky in the previous end-of-day stress test assessment would be highlighted if they began to display rapidly growing short gamma or shrinking margin excess, or there was a concentration of positions in a certain market. Risk managers will see the previously hidden risks as top-line items coupled with alerts that have been sent because of breaches at levels below cut-off values.

Alerts can be directly queried with single-click actions to drill down to the alert detail, and then to the components that caused the alert. Components can include margin compared with collateral increase, positions in a particular instrument exceeding a specified percentage of the exchange average daily volume, or margin concentration. Another click will show detail on exposure at multiple clearing houses. Additionally, a major alert that could put an account into the critical zone can have an automated kill switch.
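To make the mechanics concrete, here is a generic sketch of the kind of rule checks and alert prioritisation described above. It is not Technancial's actual logic; every field name and threshold is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    margin_excess: float       # collateral minus margin requirement
    prev_margin_excess: float  # the same figure at the previous check
    short_gamma: float         # aggregate short gamma exposure
    adv_share: float           # position as a share of exchange average daily volume

def evaluate(acct: Account) -> list[str]:
    """Return the alert reasons triggered for this account, if any."""
    reasons = []
    if acct.margin_excess < 0.1 * acct.prev_margin_excess:
        reasons.append("margin excess shrinking rapidly")
    if acct.short_gamma < -1_000_000:
        reasons.append("large short gamma position")
    if acct.adv_share > 0.2:
        reasons.append("position exceeds 20% of average daily volume")
    return reasons

accounts = [
    Account("ACC-1", margin_excess=5e5, prev_margin_excess=8e6, short_gamma=-2e6, adv_share=0.05),
    Account("ACC-2", margin_excess=4e6, prev_margin_excess=4.2e6, short_gamma=-1e5, adv_share=0.01),
]

# Sort accounts in alert status by the number of rules breached, worst first,
# so developing risks rise to the top of the dashboard view.
flagged = sorted(((a, evaluate(a)) for a in accounts), key=lambda pair: len(pair[1]), reverse=True)
for acct, reasons in flagged:
    if reasons:
        print(acct.name, "->", "; ".join(reasons))
```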

In 2018, the firm introduced JANUS Behavioural Analytics, a module that records all data related to checks in a holistic manner. This can include initial margin, variation margin, collateral and market data. The module will link a breach to the complete history leading up to it – the events, checks and activity – and display it in a single selectable chart view. This enables users to discover all actions leading to a breach, near-breach values, their frequency, actions that moved values towards breach conditions and, finally, the breach itself. 

Mirko Marcadella, managing director and co‑founder of The Technancial Company, says: “Designing JANUS as a gatekeeper, we didn’t predict its use would go beyond this as much as it has, with clients adding to the rich feature set, making it more than just a safety tool. Users saw how much could be learned from risk rule alerts, and we realised there was a lot more information in data captured even before an alert happened. JANUS Dashboard provides modern means of access to JANUS Risk Manager real-time measurements and alerts, as well as JANUS Behavioural Analytics historical data and analytics. It represents a new disruptive paradigm in limit setting and monitoring, and we are delighted to have earned this recognition for our team’s hard work.” 

Judges’ comments

“Technancial’s product is adaptable to a wide range of organisations.” 

“A solid offering that is commercially successful and adapted to current needs of trading and broking surveillance, compliance, clearing and risk management.”

“An innovative solution, with the ability for users to customise it a big plus. It targets a wide variety of organisations and users, including risk managers in the front office, as well as operational risk managers.”

Keeping the robots honest

By Alexander Campbell | News | 19 June 2019

Human overseers are in short supply in an arena where losses can be crippling in minutes

The robots need minding. But the human monitors needed to make algorithmic trading safe are in short supply.  

“There’s no script for algo trading risk management,” said Jason Conn, head of electronic trading operational risk management at Citi, at the OpRisk North America conference on June 18. “We know some of our peers are doing it differently. Some have engaged model risk teams and some have not.”

Conn said the lack of historical data and personnel was holding back the effort. While banks employ large squads of compliance and surveillance personnel to watch over human traders, responsibility for trading algos rests with thinly staffed model risk teams.

Even the definition of model risk needed revisiting, he added – in many cases, algorithmic strategies are not covered by model risk management plans, although this is now being reassessed.

Speed presents another twist for risk managers. Trading algorithms need to be validated much more quickly than other models.

“You can’t do six-month or even six-week validation of algorithms,” Conn pointed out. Regulators are unlikely to accept the usual methods that might detect a breach a day or two after it happened.

“The regulators say: ‘These trades are happening in microseconds – T+2 is irrelevant,’” Conn said in New York, using industry jargon for two days after a trade. “We have new standards for all these policies – you need to put a different lens on algorithmic trading risks.”

The use of machine learning and artificial intelligence in trading algorithms presents other challenges. It can be difficult to gauge how these models will behave in a given scenario, or even what they have done after the fact, which makes it hard to set risk controls.

Speaking with Risk.net earlier this year, Darrel Yawitch, chief risk officer at Man Group, said machine learning algorithms required many of the same controls and monitoring as human traders, including regular interrogation by risk managers.   

One panellist at the conference compared the difficulty of supervising trading algorithms to that of following Isaac Asimov’s Three Laws of Robotics.

“The first law is ‘Do no harm to humans’ – which means mitigate risks and maintain market function,” said Emil Matsakh, former chief data and analytics officer at the Commonwealth Bank of Australia in New York. “The second is ‘Obey orders’ – which means achieve a good control environment. And the third is ‘Protect yourself’ – which means building an appropriate risk management framework for conduct risk.”

Some regulators are getting involved. In the UK, authorities require firms to take clear ownership of the development, testing and oversight of each trading algorithm.

The turning point on algo risk came in 2012, with the catastrophic losses at Knight Capital, said Conn. The firm lost $461 million in 45 minutes when its new software malfunctioned.

“It changed the face of electronic trading – it created my job, which was to ensure that it didn’t happen at Citi,” said Conn. “We all knew we couldn’t operate without higher standards for e-trading. The tail risk associated with algorithmic trading could be much more significant than your traditional trading environment.”

A wider range of skills is required for algorithm risk management.

“The challenge is the human capital, the human element,” said Conn. That involved finding people with “expertise across the product set and technology and risk, [and the] very different asset classes that are in scope; equities, rates, foreign exchange, futures, credit, commodities and so forth”, he added.

And, culturally, there are rifts between those various groups of people, said Matsakh: “There are two opposed elements – model risk management and legacy trading – and you have to bring those two together.”

As Conn saw it, the model risk manager’s job would be “bringing siloed experts together and hoping it rubs off on each other”.

Correction, June 21: This article was updated to give Emil Matsakh’s correct job description

Ex-Huawei tech security chief on steeling UOB’s cyber defences

By Aileen Chuang | Profile | 19 June 2019

Singaporean bank overhauls penetration testing and scenario analysis, with Tobias Gondrom leading the effort

Tobias Gondrom is an idealist. His decision to take a job at Singapore’s UOB after a career in tech and telecom firms was prompted, in part, by an advertisement he had seen on a Cathay Pacific flight from London to Hong Kong a few years previously.

The commercial featured a young girl receiving books from a donor. Before long, her father decided that one book was too valuable to accept and insisted on returning it to the donor. The sentiment struck a chord with Gondrom – particularly in the aftermath of the financial crisis, when banks were accused of acting with excessive self-interest.

“It touched me so much I had tears in my eyes,” he says. “I went off the plane and I posted on my Facebook which I normally never do and said something like: ‘The virtue of self-motivated honesty: wouldn’t it be nice for all banks to be like this with this kind of value?’”

Since joining UOB as chief information security officer almost a year ago, Gondrom has focused on protecting the lender from cyber threats. He has enhanced ethical hacking, penetration testing and scenario analysis at a time when data loss through cyber attacks, and the reputational damage it can cause, loom ever larger.

Risk.net’s annual ranking of the biggest op risks for 2019, based on a survey of operational risk practitioners across the globe, put data compromise as the top threat for the first time. The role of the chief information security officer is at the sharp end of banks’ efforts to protect client data and guard against IT disruption and financial loss.

“I spent all my life in IT,” Gondrom says. “Being able to understand the technology allows me to combine the strategic view with the detailed technical questions, to support my team members and set them up for success.”

Gondrom was the chief technology officer for security at Huawei until June last year. He declined to comment on his role at Huawei or the espionage allegations now engulfing the Chinese firm, citing clauses in his employment contract with UOB that prevent him from talking about his previous employers. Instead, he chooses to put the spotlight on how he is drawing on lessons learnt over almost two decades in technology to augment UOB’s defences against cyber threats.

One of the first changes he made was to strengthen the ethical hacking team, which recreates the tactics of cyber criminals, to boost defensive measures and improve response times.

A typical way of testing a bank’s systems is to deploy a red and blue team. The red team plays the part of hackers attempting to breach the bank’s security systems. The blue team detects and defends. At UOB, the two teams test for vulnerabilities, come up with strategies and report to senior management.

Under Gondrom’s watch, the red team has also started penetration testing, where ethical hackers take their big bag of “keys”, comprising tools and techniques, and attempt to open all the possible locks and layers of security the bank has built.

Typically, pen testing is short term in nature whereas red/blue campaigns take place over a longer period of time. At UOB, this work was previously done by separate teams, but Gondrom has made changes.

“We combined the penetration testing work with the red team because we found there to be enhanced synergy in that they share common tools and attack techniques,” he says. “Penetration testing is useful as you get a good idea of what are our potential weaknesses. This in turn makes the work of the red team more effective.”

Pen testing represents a steady and continuous flow of work while red team engagements are more flexible and open, so combining the activities allows both units to balance workloads across different peak times, maximising productivity, Gondrom adds. The structure also makes it easier for staff to move between functions, providing a career development path for individuals.

UOB is in the process of forming a purple team, which is a mix of red and blue teams, to increase the quality and effectiveness of the activities and learning cycles.

Brought to life

Another widely used approach to manage cyber risk is scenario-based analysis, which Gondrom recognises for having two strengths: making threats specific and tangible.

“When I go to the board or speak at a conference, it is often helpful to show a number of scenarios because it brings these cases to life,” he says. “If you speak in a scenario, it’s a story. Stories are powerful. With a story, you can make it very tangible for the board members or for your senior stakeholders – why is this important, why should you do this, why should you care.”

UOB is not the only bank in Singapore to develop scenario analysis as a cyber risk tool: Standard Chartered is using the technique to model losses from cyber breaches. One caveat to scenario analysis is that an institution may need thousands of scenarios to perform scientific risk management of cyber security, which is a laborious task. Identifying a handful of major scenarios is therefore important to achieve effective results, Gondrom says. He cites examples such as the SingHealth data breach of last year, the NotPetya ransomware attack of 2017 and the Bangladesh Bank heist of 2016.

Gondrom also supports efforts by Asia-Pacific regulators to push banks to share more intelligence on the nature of the cyber threats they face. Some lenders, however, fear penalties if they highlight perceived weaknesses in their defences, and are unwilling to risk breaching local data protection laws.

“On a personal level, I’m a strong fan of sharing intelligence,” Gondrom says, “but it is important to find the balance between sharing and oversharing on an open network. You don’t want to show all your cards to your adversaries. If a circle of people is getting too big, you have a higher risk of having someone infiltrate it. That’s why closed trusted networks are probably the most common ones at the moment for sharing intelligence.”

An example of a closed network is ORX, the Operational Riskdata Exchange, a consortium of financial institutions which aggregates loss data from its members, anonymises it, and publishes the data back to member firms.

Gondrom, a German native, holds a diploma in physics from the Technical University of Munich and a master’s in general management from London Business School. He moved into the technology sector in 1999, working for a Canadian software company, OpenText, where he later became head of the security team.

He then spent seven years at the now-defunct boutique risk consultancy Thames Stanley in Hong Kong as head of information security and risk before joining the Shenzhen-based telecoms firm Huawei in 2015. Until June last year, he led the development and improvement of security technologies such as software-defined networking, the internet of things and wireless, as well as security competitiveness across Huawei’s product lines and business units, according to his LinkedIn profile.

With the previous financial crisis fading into history, bank executives and regulators, including US Federal Reserve chair Jerome Powell, believe cyber attacks could be among the triggers for the next crisis.

As such, the role played by Gondrom and his team is set to be as important as the focus on lending standards, capital levels, conduct and culture.

Gondrom plans to grow his team to enhance expertise in security risk management, governance, automation and advanced analytics, he says.

“We spend a lot of effort and focus on choosing the right people and the right team,” he says. “Once we have them, it’s about unleashing their potential.”

Biography – Tobias Gondrom

July 2018–present: Chief information security officer, UOB

2015–2018: Global chief technology officer security, Huawei

2008–2015: Head of information security and risk, Thames Stanley

2005–2007: Head of security team, Open Text Corporation

1999–2004: Senior software architect and security architect, IXOS Software AG

Editing by Alex Krohn

Basel set to update op risk and resilience principles

By Steve Marlin | News | 18 June 2019

Op risk working group to issue core ‘indicators of resilience’ proposal as update to 2011 principles

The Basel Committee on Banking Supervision is working to develop a set of metrics for operational resilience that will serve as a gauge of how well companies can maintain effective service levels following an IT disruption or other form of outage, a senior policymaker at the body has said.

The work is being conducted in concert with a long-expected update to the Basel Committee’s Principles for the sound management of operational risk (PSMOR) report, which was published in its current form in 2011, setting out policy guidance on core frameworks such as the three lines of defence model. Basel’s objective is to weave a discussion of resilience metrics into that update, and ultimately create a common set of metrics for the industry, said Arthur Lindo, chair of the operational resilience working group, at an event on June 18.

No specific timeline has been set for development of the metrics, added Lindo, who is also a deputy director in the US Federal Reserve’s division of supervision and regulation.

“We are not putting out hard metrics, like a capital measure for operational resiliency. We are going to put out indicators of resiliency. We hope to have a couple more forums where we can have discussions with firms about principles for operational resilience, and use those discussions to tease out what kinds of metrics they find to be most useful,” said Lindo, who was speaking on the sidelines of Risk.net’s OpRisk North America conference on Tuesday, June 18, after giving the keynote address.

The Fed is understood to be preparing its own policy paper on operational resilience – something Lindo did not explicitly mention, but which industry participants are eagerly anticipating.  

The apparent caution with which the Fed and the Basel Committee are proceeding on resilience stands in sharp contrast to the approach taken by the Bank of England, which is preparing to issue a full consultation paper on operational resilience guidelines this autumn, building on a discussion paper issued last year in conjunction with the Financial Conduct Authority. The watchdog is then expected to issue formal policy expectations on operational resilience next year.

Although US regulators are in regular consultation with the BoE on operational resilience, they are not in lockstep. “We work hand in glove with the Bank of England – but those are their papers. We talk, and when we talk, we have disagreements,” said Lindo during his keynote address. “Some approaches the Bank of England uses, we part with,” he added, pointing to the UK bank’s approach to stress-testing for one-off tail events.

“We don’t mandate that type of stress-testing. The idea is to give firms [the] ability to determine their own impact tolerances,” added Lindo.

Last month, the BoE’s director of supervisory risk specialists, Nick Strange, revealed that UK supervisors are considering forcing firms to set a specific “tolerance for disruption – in the form of a specific outcome or metric”. For example, in the case of a service outage, “the number of customers affected [or] the maximum allowed time for restoration of a business service” could be the chosen marker.

UK regulatory guidance was widely seen as a response to several high-profile resilience failures among UK banks, including the Royal Bank of Scotland’s 2012 platform outage, and the blocked services in 2018 at TSB.

Another plank of the BoE’s approach to safeguarding the resilience of the UK financial system has been the development of a cyber stress-testing programme. Later this year, it will test firms’ risk tolerance in a hypothetical scenario in which the IT systems supporting payments become unavailable. The BoE has also developed a framework for cyber security testing – controlled, bespoke, intelligence-led cyber security tests, or CBEST – that can be used by banks and insurers to test their cyber defences against realistic threat simulations.

Financial firms are still waiting to see what tolerance measures regulators ultimately propose. Speaking on a panel debate after Lindo had spoken, Evan Sekeris, a partner in the financial services practice at management consultancy Oliver Wyman, said an operational resilience framework risked functioning like a gauge that tells a driver how well a car is performing, but can’t predict the likelihood of an accident.

“Ultimately, it’s a qualitative exercise – an organisational exercise in making sure there’s a framework in place to deal with emergencies, which by definition are unique. Are there metrics that I can use to monitor the health of my system? Absolutely. But those metrics don’t help with managing resilience. You need to differentiate between metrics that are there to help you identify issues versus metrics that identify whether you are successful,” he said.

Basel’s operational resilience working group was formed last year following the final passage of Basel III, which removed banks’ freedom to model their own Pillar I capital requirements using risk models. Regulators’ focus has shifted towards improving the ability to rebound from cyber attacks or other disruptions. The working group is seeking to publish its update to the PSMOR, as well as a set of principles for operational resilience, by the first quarter of 2020, with the work on metrics taking place concurrently, said Lindo. 

Business lines must answer for ML biases – OCC’s Dugan

By Noah Zuss | News | 18 June 2019

Banks cannot blame developers or vendors for faulty machine learning models, says regulator

Regulators will hold business units accountable for machine learning (ML) biases that result in unfair business decisions, a top regulator at the Office of the Comptroller of the Currency (OCC) said today (June 18) at the OpRisk North America conference in New York.

ML techniques, a subset of artificial intelligence (AI), are being used by banks to automate and speed up data-intensive processes, such as credit approvals and anti-money laundering checks. But ML models’ reliance on historical data could make them susceptible to biases, which may result in harmful business decisions – for instance, denying loans to everyone in a certain postcode because the data shows a higher default rate among borrowers who lived there in the past.
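One simple screen for the kind of bias described above is to compare a model's approval rates across groups. The sketch below applies a crude four-fifths-style disparate-impact check to invented loan decisions grouped by postcode; the data, threshold and field names are illustrative only, and real fair-lending reviews rely on more than a single metric.

```python
import pandas as pd

# Hypothetical loan decisions; in practice these would be a model's outputs
# joined to applicant attributes.
decisions = pd.DataFrame({
    "postcode_group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved":       [1,   1,   0,   1,   0,   0,   0,   0],
})

approval_rates = decisions.groupby("postcode_group")["approved"].mean()

# Crude disparate-impact screen: flag the model if any group's approval rate
# falls below 80% of the most-favoured group's rate.
ratio = approval_rates / approval_rates.max()
if (ratio < 0.8).any():
    print("Potential disparate impact across postcode groups:")
    print(ratio.round(2))
```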

Beth Dugan, deputy comptroller for operational risk at the OCC, said first-line businesses were responsible for policing and eliminating such biases: “The business is the one that brings the risk in – they’re the ones that have to ensure whatever product or activity they’re doing is done appropriately within the laws and requirements, and that it doesn’t bake in unfair potential biases.”

She said business units would be answerable to regulators, even if the models in question are developed by external vendors: “We can’t blame the vendor. You still need to understand how it works.”

Understanding and explaining the outputs of ML models has been a challenge for banks. This is especially true of self-learning techniques, where model outputs are used to inform future inputs. This creates a dynamic feedback loop that can make it difficult for model developers to explain how a model will behave in future.

Banks are committing significant resources, hiring academics and AI experts, to develop frameworks to explain their ML model outputs. The stakes are high, as firms begin to realise they might not be able to use their newly minted predictive models if they cannot explain them.

Regulators have urged banks to tread carefully. Last year, US Federal Reserve governor Lael Brainard singled out consumer lending as an area where understanding how ML models work is critical: “It should not be assumed that AI approaches are free of bias simply because they are automated and rely less on direct human intervention.”

Fair lending regulations in the US put the onus on banks to explain why they accept or reject a loan application, which effectively precludes the use of black box models, whose decisions cannot be readily explained.

Businesses using ML techniques must work closely with banks’ risk and compliance teams to mitigate model biases and ensure the proper controls are in place, said Dugan.

“The business obviously needs to work with not only their risk partners, but their legal partners, their compliance partners, to ensure that it is delivered as it should be and that these biases that might result in unfair application or decisions are eliminated or mitigated to the absolute degree that they can,” she said.

Generali expands scope of internal model

By Louie Woodall | Data | 17 June 2019

Over two-thirds of Italian insurance group Generali’s solvency capital requirement (SCR) was calculated using internal models in 2018, its largest share under the Solvency II regime to date.

Prior to the application of diversification effects across risk categories, 68% of the firm’s SCR of €24.7 billion ($27.6 billion) was generated using internal models. In 2017, the share was 61% of €27.2 billion and in 2016, 64% of €28.3 billion.  

Generali’s models were used to calculate 69% of its market risk SCR, up from 52% in 2017 and 54% in 2016. Its total market SCR stood at €10.5 billion in 2018, down 1.1% on the previous year.   

Of the life underwriting risk SCR, 57% was modelled in 2018, compared with 46% in 2017 and 47% in 2016. The life underwriting risk SCR came in at €2.7 billion in 2018, down 5% year on year.

Also, a larger share of the non-life underwriting risk SCR was modelled last year compared with previous years – 68%, compared with 44% and 47% in 2017 and 2016 respectively. The SCR for this risk was €3.1 billion, down 26% year on year. 

In contrast, the modelled share of credit risk SCR barely inched up to 93% in 2018, versus 91% in 2017 and 90% in 2016. The euro amount for this charge was €5.9 billion, down 11% on 2017.

Generali calculates its operational risk purely under the standardised approach. This hit €1.7 billion in 2018, down 10% on 2017.

After taking diversification into account, the firm’s overall SCR was €20.4 billion, down 8% from €22.2 billion in 2017. The ratio of own funds to its SCR was 217% at end-2018 and 207% at end-March.
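For readers wanting to translate the percentages above into euro amounts, a quick back-of-the-envelope calculation, using only the figures quoted in this article, looks like this:

```python
# Split Generali's 2018 pre-diversification SCR into internally modelled and
# standardised portions, using the 68% modelled share reported above.
scr_pre_diversification = 24.7e9  # euros
modelled_share = 0.68

modelled = modelled_share * scr_pre_diversification
standardised = scr_pre_diversification - modelled
print(f"Modelled:     €{modelled / 1e9:.1f}bn")      # roughly €16.8bn
print(f"Standardised: €{standardised / 1e9:.1f}bn")  # roughly €7.9bn
```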

What is it?

Solvency II obliges insurers to publish an annual Solvency and Financial Condition Report, containing data on their performance, system of governance, risk profile, valuations and capital management.

The regulation also ushered in harmonised Quantitative Reporting Templates to promote uniform disclosures across the European Union. The data in the figures above is derived from Generali’s QRT S.25.02.22 – ‘solvency capital requirement for groups using the standard formula and partial internal model’.

Why it matters

Generali won approval to extend its internal model to its Austrian and Swiss operations in 2018, which changed the group risk profile and allowed it to claim greater diversification benefits than before.

Under the “two-world” approach, Generali does not factor in any diversification benefits between those elements of its SCR calculated using internal models and those generated using the standardised approach, meaning that, prior to 2018, the Austrian and Swiss entities were essentially ring-fenced, and their set of market and underwriting risks could not be used to offset those present in its Italian, French and German operations, among others.  

There remain pockets of risk run by Generali that are capitalised using the standardised approach, suggesting the firm could continue to expand its internal model and reap further SCR savings in the future. The group voluntarily added €407 million to its 2018 SCR to cover planned modelling improvements, suggesting further optimisations could be in the works.

Lessons from a decade of top 10 op risks

By Ariane Chapelle | Opinion | 17 June 2019

Constants and changes in Risk.net’s annual rankings spotlight common gaps in op risk management

Earlier this year, Risk.net published its annual list of the top 10 operational risks. Based on a survey of practitioners around the world and interviews with industry insiders, the ranking aims to highlight the 10 most important operational risks for the financial sector in the year ahead.

Looking at the concerns that have appeared on the list over the past decade, emerging operational risks can be sorted into three categories – the risks we all know, the risks we should know and the risks we don’t know – and each holds a lesson for practitioners.

The first type comprises risks that are flagged from year to year. The persistence of certain risks might mean the sector never learns and keeps ignoring gaps in controls.

The second category consists of truly emerging but often underappreciated risks, such as those arising from climate change and growing income inequality. The category also contains self-inflicted risks: obvious internal vulnerabilities that have not been addressed.    

The last bucket groups together risks that are nigh on impossible to predict – say, an unexpected physical attack or political swing – reinforcing the importance of operational resilience, rather than just risk prevention.

Table A provides an overview of the top 10 risks that have appeared on Risk.net lists since 2010. Starting with the 2016 list, the risks are ranked in order of importance. Each recurring idiosyncratic risk, or group of related risks, is given its own colour. Risks that have been mentioned only once or twice are left white – these are usually based on recent events and so can be described as predicting the past.

Cyber risk and data security, regulatory fines and outsourcing feature frequently. These are the threats we all know, also appearing on the ORX list of top operational risks as chosen by banks and insurers.

Risk.net’s taxonomy has changed in recent years, but cyber risk in its various guises has still topped the list since 2013. This year it is represented by data compromise (#1), IT disruption (#2) and theft and fraud (#5).

Reputational risk dropped out after 2014, followed by the related social media risk a year later. Business continuity also stopped appearing in the top 10 after 2014. Arguably, both are more impacts than risks, and their absence might simply be due to a clearer distinction made now between risks and impacts.

Outsourcing risk was mentioned in 2011 and re-emerged in 2016, remaining on the list ever since. Organisational change, which refers to the risk of mishaps during any kind of internal transition, appeared in 2016 and has remained in the top 10 since then. It seems that operational risk managers agree with the philosopher Heraclitus that “everything changes and nothing stands still”, but they see it more as a risk than the core of a philosophy. 

If the same risks rank in the top 10 over a decade, it might simply mean they are not being tackled, at least not effectively

Some other risks come and go, depending on the news flow over the previous months. A good example is the threat of terrorism, which was flagged in the 2016 list, following the Charlie Hebdo attacks in January 2015 and the November 2015 violence at the Bataclan concert hall in Paris. Other examples are political risk, which appeared in the ranking in 2012, 2013, 2017 and 2019, and model risk, which featured in the 2015 and 2018 lists.

Other perceived hazards are even more news-driven, such as the risk of epidemic disease mentioned only in 2013, after an outbreak of salmonella in the US and a major epidemic of ebola in West Africa in 2012; index rigging in 2014 after the Libor scandal broke in early 2013; and board overstretch in 2014 after post-financial crisis rules on corporate governance were published in 2013. These topical risks were short-lived and demonstrated that human minds are especially good at predicting the past.

But, as the table shows, there is also a great deal of consistency across the yearly lists. There are two possible explanations for this and one of them is concerning: if the same risks rank in the top 10 over a decade, it might simply mean they are not being tackled, at least not effectively. It seems that talking about an issue, often at length, or mentioning it as a priority or as a main concern gives us the illusion of having prevented it, even before any effective measure is implemented – a cycle that ensures we report the same concerns over and over again.

The more reassuring explanation for the high degree of overlap between Risk.net’s annual lists is that, despite its diversity, operational risk displays a certain stability around its core drivers. These are systems and data, fraud, financial crime and related regulatory sanctions, outsourcing risks, and conduct and culture. Arguably, they give rise, at least in part, to the other risks.

Latent risks and curveballs

Some of the truly emerging risks, which firms should consider but often don’t, are the slow-burning risks that banks have little control over, such as water scarcity, migration and social unrest. These can be assessed using the so-called Pestle analysis – an acronym representing the political, economic, socio-cultural, technological, legal and environmental factors that can affect the risks for a firm or a project.

Other latent risks we should all be aware of are those stemming from internal weaknesses that no-one has the resources, the time or the will to address. A scandal or, if you are lucky, a large near-miss often acts as a wake-up call to strengthen controls and tackle long-overdue issues. Examples include rogue traders at UBS and Societe Generale and money laundering facilitated by HSBC.

And then there are real surprises, unpredictable by definition. For example, neither Brexit nor any other kind of political risk made it into the 2016 Risk.net ranking, which was compiled at the end of 2015, before the UK referendum and the US presidential election took place in 2016. Geopolitical risk duly appeared in the 2017 list.

Since firms cannot foresee, let alone prevent, every risk, they should have in place an adequate operational resilience framework. When disruptions inevitably occur, such a framework will ensure, first, continued service provision at a minimum level; second, return to a normal state; and third, learning from the incident and improving prevention. It is an approach proposed by the Bank of England and the UK’s Financial Conduct Authority in a discussion paper on operational resilience and reaffirmed in a recent BoE speech.

We cannot predict the future, but we can study our environment and adapt accordingly – the most successful firms are those able to adapt not through magical foresight, but through constant observation and change.

Editing by Olesya Dmitracova

Financial firms toil to meet new EU rules on outsourcing

By Costas Mourselas | Features | 10 June 2019

Negotiating right to audit vendors, including cloud providers, seen as toughest requirement

The brief of bank operational risk managers is changing rapidly. Where once they spent their days fretting about internal systems, now they face a galaxy of potential points of failure and back doors for cyber attackers across thousands of third-party service providers. And regulators have made it increasingly clear that banks cannot delegate responsibility without maintaining a strict level of oversight.

In its latest guidelines on outsourcing, the European Banking Authority (EBA) requires banks, payment companies and certain investment firms to do more than ever before to vet their suppliers. Some argue complying will be no easy task and may even threaten the survival of existing arrangements in certain cases.

“These regulations have been brought in because there are significant risks that the industry needs to manage and it is not doing it well enough,” says Guy Warren, chief executive of software vendor ITRS Group. “What the regulator is saying is … ‘You need to decide what you can outsource profitably with the necessary supervision. If it’s going to cost you too much money to outsource, you shouldn’t be doing it.’”

Worries centre on a requirement concerning providers of “critical or important” functions. The guidelines state that financial and payment firms should ensure those vendors grant them full access to their business premises, including systems and data, as well as unrestricted audit rights related to the outsourcing arrangement.

Two sources point out that cloud providers, which service a growing number of banks, are particularly unlikely to give their customers such latitude.

If firms manage to obtain the access and audit rights from their outsourcers, solving one problem, they will acquire another: the bureaucracy and expense of auditing. They could make their life easier by clubbing together with competitors to carry out pooled audits – as Deutsche Börse has recently agreed with Microsoft. Firms could also use third-party certifications or audit reports, although the guidelines say they should not rely solely on these.

In what some see as the most radical aspect of the guidelines, they apply not just to third-party vendors but also to intragroup arrangements, where a subsidiary of a larger financial group outsources service provision to another part of the same group. Such existing contracts are not as robust as those drawn up with external suppliers and will require the biggest overhaul.

For others, the biggest changes lie elsewhere. BNP Paribas, for example, singles out stricter checks of subcontractors and greater focus on concentration risk posed by outsourcing multiple services to the same provider or by outsourcing important functions to one of a few dominant suppliers, meaning the supplier cannot be easily replaced if it fails.

Can I have a look around?

The new rules are part of a broader push by financial regulators around the world to strengthen firms’ operational resilience. The term comes up in the EBA guidelines and refers to the ability of firms and the financial system as a whole to absorb and adapt to shocks, to borrow the Bank of England’s definition.

Later this year, the central bank plans to publish a consultation paper on its proposed new policies in this area. Speaking in May, Nick Strange, the BoE’s director of supervisory risk specialists, said the bank’s approach will be to focus on continuity of important business services in the event of disruption.

On the global level, the Basel Committee on Banking Supervision set up the Operational Resilience Working Group in 2018 in order to contribute to national and international efforts to improve cyber risk management, among other things.

“Since breaches will inevitably occur regardless of the level of protection, the risk management approach … should also address how to respond, recover and learn from any breach,” the committee said in the document announcing the creation of the working group.

“This kind of contingency and continuity planning implies that a firm’s systems be mapped according to their criticality, and that a risk appetite be defined for the firm’s assets and businesses against relevant metrics. Such an approach also applies to operational disruptions from causes other than a cyber attack – for example, natural catastrophe or failure of a critical third-party service provider.”

The [main] cloud providers are very clear they’re not going to allow every bank to come in and walk around their data centres

Charles Forde, UBS

The distinction between critical and less important functions is repeated in the EBA guidelines, which update European banking rules on outsourcing issued in 2006 and incorporate the EBA’s December 2017 recommendations on outsourcing to cloud providers. An operational function is deemed critical or important if its failure would “materially impair” a firm’s compliance with its obligations, or its services or financial performance.

If such a function is outsourced, the guidelines require financial firms to negotiate the extensive rights of access and audit in contract talks with their providers. For the outsourcing of less important functions, firms are meant to ensure the access and audit rights where warranted.

Duncan Pithouse, intellectual property and technology partner at law firm DLA Piper, says outsourcers have traditionally been reluctant to give their clients the degree of access mandated by the guidelines and firms have had to make compromises – for example, accepting limited access rights to premises that deliver shared services.

He adds that cloud providers have been particularly challenging to negotiate with, with many even refusing to tell firms where data services are located.

Charles Forde, who oversees third-party risk at UBS, echoes that, saying: “The [main] cloud providers are very clear they’re not going to allow every bank to come in and walk around their data centres.”

Big cloud providers are cagey probably because they service hundreds of firms and would rather keep disruptive inspections to a minimum.

In a survey of banks in Europe conducted in late 2017, 55% of respondents were already using public cloud services or were aware that another part of their bank was using such services. The cloud industry is dominated by three providers: Amazon Web Services, Microsoft Azure and Google Cloud. Between them, they hold 58% of the global market, according to estimates by technology consultancy Canalys.

Some say financial companies would find it easier to negotiate with their suppliers if the EBA guidelines included standard clauses to be inserted into contracts. But, despite requests, no such clauses have been included. An EBA spokesperson says it is not the regulator’s job to do this, noting that it is up to institutions to evaluate and manage the relationships they have with service providers.   

Shortcuts and workarounds

Firms may be able to overcome outsourcers’ resistance or, at least, reduce the burden of scrutinising multiple vendors if they organise audits together with other clients of the same vendor.

This is, for instance, what Deutsche Börse agreed with Microsoft earlier this year when it struck a deal to use Microsoft’s cloud services. Deutsche Börse will examine the technology company via regular pooled audits performed by a so-called collaborative cloud audit group, which was set up in 2017 and includes large European Union banks and insurers.     

“Performing such audits as a group has a lot of advantages,” says Michael Girg, chief cloud officer at Deutsche Börse. “You can make use of the diverse experience of the participating internal auditors of the respective financial institutions and save resources on both sides. The CSP [cloud service provider] only needs to host one audit at a time, and the participants can decrease the costs.”

However, the exchange group has also secured the right to audit Microsoft individually, he notes, declining to provide further details.

Forde at UBS wants to see more collaboration between firms, arguing there is no competitive advantage to taking different approaches on things like the auditing of cloud companies. For example, the financial industry could come up with a standard list of questions for cloud providers, which could be supplemented with additional, company-specific questions, he says.

Firms can already use utilities such as TruSight and KY3P, set up by two different groups of big banks, to vet vendors. Each in its own way, the platforms gather information on providers and make it available to customers.  

Vendors themselves can help, too, by providing third-party or internal audit reports, as allowed by the guidelines.

For instance, Microsoft gives clients access to regular third-party audits, which cover controls for data security, availability, processing integrity, and confidentiality.

But such external stamps of approval do not let financial firms off the hook entirely. The EBA document says they can use external certifications and audit reports, as well as pooled audits, “without prejudice to their final responsibility regarding outsourcing arrangements”. When it comes to the outsourcing of critical or important functions, the EBA introduces another restriction, stating that financial companies should not rely solely on third-party certifications and reports “over time”.

A financial industry lobbyist flags a lack of clarity in the restriction: “How long can I rely on them? And how often? Every three years? Ten years?”   

Wider net

The burden of stricter rules on outsourcing is compounded by their extension to intragroup arrangements.

The industry tends to view outsourcing to a provider within the same group as less risky because firms should be able to influence internal providers more than they could with suppliers outside the group.

The EBA disagrees with the industry’s conclusion, though not with its reasoning. Its guidelines say that, when outsourcing within the same group, financial companies “may have a higher level of control over the outsourced function, which they could take into account in their risk assessment”.

Mark Kell, a banking director at Deloitte, says smaller UK banks face a steeper climb to implement the intragroup guidelines than their larger peers because, until now, they have been under less pressure to document their intragroup service arrangements. For example, banks with more than £10 billion ($13 billion) in assets have already started documenting critical services and their provision under UK “operational continuity in resolution” rules.

But large financial institutions, too, have some way to go to be fully compliant.

Amit Lakhani, a senior executive in IT and third-party risk management at BNP Paribas, reckons the bank has around 60% of the requirements in place, noting that the main areas that need more work are the bank’s subcontracting arrangements and concentration risk analysis.

In my view, for many legacy arrangements with third parties, the contracts never had the subcontracting clauses or conditions in place

Amit Lakhani, BNP Paribas

Among a number of new rules on using subcontractors, one states that financial firms can allow their service provider to outsource the provision of an important function only if the subcontractor grants it the same access and audit rights as those granted by the first outsourcer. According to another rule, before entering into an outsourcing arrangement, firms should assess the associated risk, and suboutsourcing of important functions should be part of the risk assessment.

“In my view, for many legacy arrangements with third parties, the contracts never had the subcontracting clauses or conditions in place,” says Lakhani. “This implies that we do not have leverage to ask our third parties to provide data on their subcontractors or the control framework they have in place to manage their own third parties – subcontractors for us – in many cases… In addition, with data protection laws it becomes much more difficult to get confirmation from our third parties to attest that data is only within certain geographical perimeters.”

On concentration risk, Lakhani says: “While we have been closely monitoring our critical outsourcing relationships, concentration risk was never a key driver of this work.”

At least with the second type of concentration risk – stemming from outsourcing important functions to one of a few dominant suppliers – regulators may soon give firms a helping hand.

In April, the EU’s top financial supervisors, including the EBA, asked the European Commission to consider legislating for an oversight framework for critical providers of IT services, mentioning specifically concentration and systemic risks. “This [framework] will be particularly relevant in the near term for cloud service providers,” the three bodies said in a joint paper.

Risk.net asked Amazon, Google, Microsoft and Cispe, a trade body representing cloud providers in Europe, what they thought of the proposal, as well as for their views on the EBA guidelines. Microsoft declined to comment, Cispe and Google did not respond, while a spokesperson for Amazon Web Services said: “AWS is fully committed to helping our customers achieve compliance with the EBA outsourcing guidelines, where applicable and as they pertain to their use of AWS services.”

The new rules apply from September 30, 2019, although there is a transitional period for existing contracts until the end of 2021. In addition, if firms are unable to review their outsourcing arrangements for important functions by the end of 2021, they will have to outline a plan of action to their national supervisors.

Editing by Olesya Dmitracova