Risk Technology Awards 2019: Making machines more helpful

By Clive Davidson | Features | 18 July 2019

Machine learning can be too efficient; now, vendors are looking for ways to make it more accurate. Clive Davidson looks at the stories behind this year’s Risk Technology Awards

One of the attractive things about machine learning, in theory, is that it approaches tasks in a more human way – checking new information against past experience and getting smarter as it goes – while also having a superhuman capacity to process that data. It can be too efficient, though.

When applied in the world of risk monitoring, the always-on, always-willing, never-tired machines can generate more alerts than their human colleagues are able to investigate. Some of the winning vendors in this year’s Risk Technology Awards (RTAs) have been trying to solve this problem, creating something of a cognitive revolution, where humans and machines work side by side to tackle some of the biggest challenges to the safe and stable operations of financial markets and services.  

Full list of winners

Bank ALM system of the year
Moody’s Analytics

Best vendor for innovation
IBM

Best vendor for systems support and implementation
Quantifi

Credit data provider of the year
Moody’s Analytics

Credit stress-testing product of the year

Cyber risk/security product of the year
Gurucul

Enterprise-wide stress-testing product of the year
Moody’s Analytics

Financial crime product of the year
IBM

GRC product of the year

IFRS 9 – ECL modelling solution of the year
Moody’s Analytics

IFRS 9 – enterprise solution of the year
Moody’s Analytics

Managed support services provider of the year
Broadridge Financial

Market surveillance product of the year
Eventus Systems

Model validation service of the year
Yields.io

Op risk modelling vendor of the year
The Analytics Boutique

Regulatory capital calculation product of the year
IHS Markit

Regulatory reporting system of the year
Wolters Kluwer

Risk dashboard software of the year
The Technancial Company

Wholesale credit modelling software of the year
Moody’s Analytics

One example comes from the world of central limit order book trading, where market participants are under pressure to identify and catch spoofing and layering – strategies in which traders issue non-genuine orders to mislead others as to the level of supply or demand. 

“As an industry, we are having more spoofing incidents in both electronic and manual high-touch trading,” remarked one member of the RTA judging panel.

The problem with spoofing and layering is that they are complex, evolving behaviours that cannot be identified with a single simple check. The Validus platform from Texas-based Eventus Systems provides dozens of parameters that can help identify spoofing and other illicit behaviours, but this isn't a cure-all – setting the parameters widely enough to capture the full range of manipulation can generate thousands of alerts a day, overwhelming the ability of most institutions to follow up. So Eventus has trained machines to sift through the alerts and prioritise those needing the most urgent attention.

Eventus uses the results of investigations of alerts by its clients’ human analysts to continuously train a machine learning model to spot likely manipulators. “The model will look through the alerts that have been generated by the rules and come up with the top 10 or 20 that need further investigation, along with confidence levels,” says Travis Schwab, chief executive of Eventus, which won the Market surveillance product of the year award. 
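The workflow Schwab describes – rules generate alerts, a model trained on analysts' past verdicts ranks them with confidence levels – can be sketched in a few lines. The sketch below is purely illustrative and is not Eventus's model: the feature names and hand-set weights stand in for what would, in practice, be learned from real investigation outcomes.

```python
import math

# Hypothetical rule-then-rank alert triage. A rules engine emits alerts;
# a scoring model (here a hand-set linear model standing in for one trained
# on analysts' past dispositions) ranks them for investigation.

def score_alert(alert, weights, bias=-3.0):
    """Return a confidence in [0, 1] that the alert merits investigation."""
    z = bias + sum(weights[k] * alert.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash to a probability-like score

def top_alerts(alerts, weights, n=10):
    """Shortlist the n alerts the model is most confident about."""
    scored = [(score_alert(a, weights), a) for a in alerts]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:n]

# Illustrative features a spoofing rule set might attach to each alert.
weights = {"cancel_ratio": 2.5, "order_imbalance": 1.5, "repeat_offender": 2.0}
alerts = [
    {"id": 1, "cancel_ratio": 0.95, "order_imbalance": 0.8, "repeat_offender": 1},
    {"id": 2, "cancel_ratio": 0.10, "order_imbalance": 0.1, "repeat_offender": 0},
    {"id": 3, "cancel_ratio": 0.90, "order_imbalance": 0.9, "repeat_offender": 0},
]
shortlist = top_alerts(alerts, weights, n=2)
for confidence, alert in shortlist:
    print(alert["id"], round(confidence, 2))
```

The point of the design is visible even in the toy: the rules stay human-readable and explainable to regulators, while the ranking layer decides only the order in which humans look at their output.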

Using machines to shortlist candidates for investigation hugely improves efficiency, while generating the original alerts from human-devised rules enables institutions to understand why they are targeting particular traders or firms and explain this to regulators who will not accept ‘black-box’ solutions.

In a recent example, an Eventus client decided to review activity on its trading platform with a view to cleaning up any undesirable behaviour. It set wide parameters on the Validus system, which then produced thousands of alerts a day – far more than the platform's three compliance staff could deal with manually. The machine learning module, however, identified the most suspect cases, allowing the compliance staff to investigate quickly and terminate a handful of accounts showing dubious activity. The result was an immediate drop in the number of alerts.


Insider danger 

The combination of artificial intelligence (AI) and data from multiple sources is all about putting events and other pieces of information in context, says Craig Cooper, chief operating officer of California-based Gurucul, which won the Cyber risk/security product of the year award. This enables organisations to see patterns and spot anomalies, and thereby identify – and even predict – risky behaviour. “Traditional analytics tends to be rule-based solutions, focused on known transactional patterns. The power of today’s analytics allows businesses a much wider contextual view, which can be used to identify both known and unknown risky behaviour patterns. This approach results in fewer false positives and reduces investigation time significantly,” says Cooper. 

One area where this approach is proving its worth is identifying insider risks. By continuously monitoring employee activity across a number of internal systems, institutions can establish baseline behaviour patterns, or profiles, for individuals and then raise alerts when anomalies show up. This could include suspicious loan approvals, transaction overwrites, emails to competitor domains or unusual physical access to sensitive areas. One Gurucul user was recently able to predict the departure of a disgruntled individual and – potentially – prevent the person from stealing data, committing fraud or sabotaging systems. “An organisation’s insiders – especially those with privileged access to sensitive systems and data – can pose a serious risk to a financial operation,” says Cooper. 
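The baseline-and-anomaly pattern Cooper describes can be illustrated with a toy example. This is a generic statistical sketch, not Gurucul's method; the activity metric (daily sensitive-file accesses) and the three-standard-deviation threshold are assumptions for illustration only.

```python
import statistics

# Toy baseline-and-anomaly profiling for insider-risk monitoring.

def baseline(history):
    """Summarise an employee's past daily activity counts."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(today, history, threshold=3.0):
    """Flag today's count if it sits more than `threshold` std devs from baseline."""
    mean, sd = baseline(history)
    if sd == 0:
        return today != mean
    return abs(today - mean) / sd > threshold

# e.g. daily counts of sensitive-file accesses over the past two weeks
history = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5, 3, 5, 4, 6]
print(is_anomalous(5, history))    # an ordinary day
print(is_anomalous(40, history))   # sudden bulk access, the kind of spike that raises an alert
```

Real systems profile many signals at once (logins, approvals, physical access) and weigh them in context, but the underlying idea is the same: learn what normal looks like per individual, then alert on deviations.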

Behaviour profiling is an approach IBM also uses in its financial crime solution. IBM Safer Payments can link in all systems through which an organisation interacts with its customers, plus any other relevant sources of information. Safer Payments monitors the data and activity in real time and – using IBM’s Watson AI technology – builds a picture of the behaviour of a customer, or other entity, across all the organisation’s channels of interaction, brands and payment types. The system can take in a wide variety of data, including payments, non-monetary events, and authentication and security data from multiple sources without requiring it to be converted to a fixed format. 

“Valuable information is usually lost in these data transformations, which affects both the system’s analytical performance as well as analysts’ ability to effectively work the alert cases,” says Austin Wells, Watson financial crimes offering manager at IBM, which won the Financial crime product and Most innovative vendor of the year awards. The Safer Payments system is able to piece together both transactional and non-transactional elements from each channel and learns about behaviours over time, using AI as a ‘virtual analyst’ to assist human experts in finding threats and optimising defences against fraud. “But rather than generating a black-box model, the system generates easily readable scenarios as suggestions that the user can choose to deploy,” says Wells.


Back-office bots

The combination of AI and human intelligence is also beginning to find its way beyond the front and middle office. 

New York-based Broadridge, which won the Managed support services provider of the year award, expects operations to evolve to the point where experienced and expert humans, performing client-facing roles that can differentiate a company’s services, work alongside machines – or ‘bots’ – of various levels of sophistication that automate repetitive non-differentiating activities. 

“We expect these bots to work in either an ‘attended’ fashion with their human co-workers, or unattended, while still being monitored by human co-workers,” says Mike Alexander, head of North American wealth and capital markets solutions for Broadridge. “As bots mature, we expect to see a quantum leap in how services are rendered, from account opening and tax services to trade settlements, money movement and international operations.”

Broadridge has already taken a step in this direction with a product for trade allocations that uses human-assisted machine learning to take in non-standard trade allocations in various formats, such as PDFs, comma-separated value files or emails, and applies pattern-matching algorithms to convert them to allocations its middle-office system can recognise and process. “As more instances of this product are instantiated across our client base we expect to be able to get these machines to share patterns across each other rather than being assisted by human labour,” says Alexander. 
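The pattern-matching step can be pictured with a toy normaliser. The input formats, field names and regexes below are invented for illustration – Broadridge's product learns its patterns with human assistance rather than shipping fixed rules – but the shape of the task is the same: recognise a known layout, extract the fields, and hand unrecognised lines back to a human.

```python
import re

# Toy normaliser for free-form trade allocation lines (illustrative only).
PATTERNS = [
    # e.g. "Allocate 500 IBM to ACC-123" (email-style instruction)
    re.compile(r"Allocate\s+(?P<qty>\d+)\s+(?P<symbol>[A-Z]+)\s+to\s+(?P<account>[\w-]+)"),
    # e.g. "ACC-123: 500 x IBM" (CSV/spreadsheet-style variant)
    re.compile(r"(?P<account>[\w-]+):\s*(?P<qty>\d+)\s*x\s*(?P<symbol>[A-Z]+)"),
]

def parse_allocation(line):
    """Return a normalised record, or None if no known pattern matches."""
    for pattern in PATTERNS:
        match = pattern.search(line)
        if match:
            record = match.groupdict()
            record["qty"] = int(record["qty"])
            return record
    return None  # candidate for human review, which can seed a new pattern

print(parse_allocation("Allocate 500 IBM to ACC-123"))
print(parse_allocation("ACC-987: 250 x MSFT"))
```

The `None` branch is where the 'human-assisted' part of the learning loop lives: each manually resolved case becomes training material for a new pattern, which is also what would let separate instances share patterns with each other.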

Technology providers are also starting to investigate a cognitive approach to systems implementation and support. “We are looking to use AI to automate and provide a better service for both support and operations,” says Rohan Douglas, chief executive at New Jersey-based Quantifi, which won the Best vendor for systems support and implementation category. This could include AI tools for implementation configurations, as well as automation to provide quick responses to system issues, and system monitoring for pre-emptive actions to avoid issues. “The idea is to keep the personal support, but supplement it with tools to make the people more effective,” says Douglas. 

But there are inherent dangers in the current enthusiasm for AI-based modelling, especially where there is a close coupling of models and their developers, warns Jos Gheerardyn, chief executive and co-founder of Yields.io, which won the Model validation service of the year award. With an abundance of commercial and open-source analytical and AI tools now available, it can be quick and easy to develop models for a variety of applications. However, he warns the industry not to forget the statistician’s aphorism that ‘all models are wrong; some are useful’, and to prioritise model risk management.

To highlight the issue, Gheerardyn points to the interdependency between humans and models often found in front offices. “Many front-office quant teams in banks often become an indivisible part of their own analytics. These teams are constantly needed to fix issues as they appear, to fine-tune calibrations and perform small modifications. When the quants are gone, the models have to shut down,” says Gheerardyn.

This hybrid ‘human-algo’ approach is not sustainable given the rapid growth of models in financial institutions. What is needed is a methodology that from the outset takes into account that a model will at some point fail. “To manage that risk, the design of the model should focus on risk management, studying data quality, quantifying model risk and determining the feasibility of monitoring,” says Gheerardyn. 

These factors should be weighed against the potential benefits of the model, as well as the risk appetite of the bank, enabling the institution to choose the most appropriate solution – a complex model, a simple one, or no model at all. “This exercise at the beginning of the cycle will yield a design that allows for models to be deployed in a robust fashion with clearly defined limits that can be monitored and managed in a completely automated fashion.” 

The combination of machines, models and big data means institutions now have not only tools for automating mundane repetitive tasks, but tools to aid seeing the big picture and prioritising where human expertise can be most productively directed. But as machines take on more responsibility, they must be subject to the same rigours of risk management as their human colleagues.


‘R’ is for modelling

This year’s awards also provided further evidence of the shift towards open-source technology – including the increasing use of the R programming language for modelling.

“While IT departments looking for an end-user computing language tend to prefer Python – which is developer-led and has established tool chain support – R remains the language of choice for the majority of statisticians and is often used by desk quants,” says Ian Green, an independent consultant and member of the RTAs judging panel. “Over recent years R has developed significantly and now offers a powerful platform for [transforming], analysing, charting and presenting data.” 

New York-based AxiomSL supports R along with other modelling tools including Excel and Java for use with its risk and compliance management systems. “We have had native R functionality for over seven years. We can easily push model parameters and information into and out of R code while maintaining traceability of data,” says Richard Moss, global product manager for capital and liquidity at AxiomSL. 

In a recent implementation of its ControllerView platform for International Financial Reporting Standard 9 (IFRS 9) compliance, a bank required its suite of Excel models for probability of default, loss given default and macroeconomic scenarios to be migrated to R. “The bank was looking for a flexible, industry standard code base that can scale well and be easily repurposed,” says Moss. AxiomSL’s implementation team was able to migrate the bank’s models to produce the same results as under Excel, with full documentation that met the local regulator’s approval. 

Migration to R proved to have further benefits. “In AxiomSL’s R-based environment, the model assumptions and calculations, previously hidden in the deeply layered Excel environment and lacking documentation, suddenly became obvious and accessible. The opportunity to examine the migrated models with fresh eyes led the bank to re-evaluate its assumptions and make improvements that can enhance expected credit loss steering and the bank’s overall strategic and operational decision-making,” says Moss. 

The adoption of R is part of a trend away from costly proprietary tools towards more open software, says AxiomSL. “Banks are moving toward open-source tools that have strong statistical libraries, data fundamentals and performance optimisation,” says Moss.

Moody’s Analytics has also recognised that more institutions are choosing to develop models in R, and its Scenario Analyzer stress-testing framework supports models developed in a number of scripting languages. 

“Several organisations have supplemented the models developed in the Scenario Analyzer framework with R or Python script models. As a result, banks gain the flexibility to use existing investments in technology, as well as their own organisational capabilities, while adhering to their internal technology policies,” says Steve Tulenko, executive director, enterprise risk solutions at Moody’s Analytics, which won six awards including Enterprise-wide stress-testing product of the year. Moody’s Analytics is also now using R and the R Shiny development package for some of its own product developments.

Green adds that R can also now be used for hosting interactive websites, accessing cloud databases, writing HTTP servers and running machine learning algorithms that require parallelisation.


Technology vendors were invited to pitch their products and services in 23 enterprise, credit and operational risk categories. Candidates were required to answer a set of questions within a maximum word count about how their technology met industry needs, its differentiating factors and recent developments. A total of 133 entries were received. 

A panel of 10 industry experts and Risk.net editorial staff reviewed the shortlisted entries, with judges recusing themselves from categories or entries where they had a conflict of interest or no direct experience. The judges scored and commented on the shortlisted entrants. The majority of the judges met to review the scores and, after robust discussion, made final decisions on the winners. Where there was no credible winning candidate, the category was scrapped. In total, 19 awards were given this year.


The judges

Amit Lakhani, Head of operational risk controls for information communication technology and third-party management for corporate and institutional banking, BNP Paribas

Deborah Hrvatin, Managing director and global head of institutional clients group operational risk management, Citi

Glenna Hagopian, Chief conduct officer and head of enterprise risk management (ERM), Citizens Financial Group

Hugh Stewart, Adviser, Chartis Research

Ian Green, Chief executive, eCo Financial Technology

Matt Sulkey, Managing director and head of ERM framework and governance, TIAA

Peter Quell, Head of portfolio analytics for market and credit risk, DZ Bank

Sid Dash, Research director, Chartis Research

Clive Davidson, Contributing editor, Risk.net

Duncan Wood, Global editorial director, Risk.net


Read more articles from the 2019 Risk Technology Awards

Synthetic securitisations and Europe’s capital sweetener

By Samuel Wilkes | Features | 18 July 2019

Regulator weighs high-quality label for synthetic deals, but without favourable capital treatment

Hit TV series Westworld features a futuristic theme park filled with synthetic humanoids programmed to act out specific roles in a story. The show asks the ethical question: should the synths be treated like humans?

European financial authorities are grappling with their own synthetic conundrum. Should balance sheet synthetic securitisations be treated like other, cash deals? Specifically, should they qualify for a regulatory label designed for high-quality securitisations, which lowers the capital requirements for banks and insurers?

Financial institutions would welcome lighter capital treatment for synthetic deals that achieve the simple, transparent and standardised (STS) label, as they adjust to stringent new risk weights for securitisation under incoming regulation.

“A key reason people are keen to get STS for synthetics is because it preserves some of the efficiency that the old framework used to have with lower risk weights,” says Robert Bradbury, a managing director at advisory firm StormHarbour Securities.

Investors and issuers say synthetics won’t benefit much from the STS badge alone, and struggle to see the point of the label if there is no favourable capital treatment.

“I was surprised by the fact synthetics could get STS but without preferential capital treatment,” says Thomas Wilson, a director of securitisation and covered bonds at Rabobank. “I don’t think it would have much of an impact if there was no preferential treatment. The benefit of STS is mostly for the bank in terms of capital. I don’t think the current investors are looking for an STS-like stamp.”

Regulators are yet to make up their minds. Christian Moor, principal policy officer at the European Banking Authority, told an industry conference in June that most European regulators had accepted balance sheet synthetic securitisations can qualify for the STS label, but were undecided as to whether exposures to qualifying transactions can benefit from the lower capital requirements.

“In the regulatory community in Europe, most of the regulators accept it is technically possible to create criteria for balance sheet synthetics,” Moor said. “The question is now whether they receive preferential capital treatment or not. Should it just be the framework but not the capital? Those are the things we are still discussing and hopefully by September you will see the first indications in our discussion paper.”

Revenge of the synth

Lenders use balance sheet synthetic securitisations to transfer the credit risk on a portfolio of loans to investors. Banks can deduct from their total risk-weighted assets the amount of risk transferred, as long as local supervisors give their blessing to the deals in accordance with bank capital laws enabling significant risk transfer.

The instrument can take many forms but a common method used by banks is to buy protection from a special-purpose vehicle on an underlying loan portfolio using a credit default swap or financial guarantee. The special-purpose vehicle then issues credit-linked notes in tranches. Investors buy the junior tranches, while the bank retains the senior tranches.

Portfolios of debt from small and medium-sized enterprises (SMEs) are among the most common assets underlying synthetic securitisations. Such deals can already benefit from lower capital requirements under the STS regime. To do so, they must meet a strict set of criteria (see box: A helping hand to SMEs).

A likely reason for European regulators’ foot-dragging over whether to apply lower capital to synthetic STS is that doing so would create a deviation from standards drafted by the Basel Committee on Banking Supervision, agreed between European representatives and their foreign counterparts.

The committee’s final standards published in July 2015 only allow true sale securitisations to qualify for the Basel version of the STS label, known as simple, transparent and comparable, and explicitly bar structures that transfer risk through credit default swaps or guarantees.

Assessing the arguments for and against at the June industry conference, Jana Kovalcikova, a policy expert at the European Banking Authority, said: “On the one hand, we have put out data that shows the performance of synthetics is similar to traditional and also that [they have] good performance and low default on senior tranches. On the other hand any preferential treatment will not be Basel compliant. As a prudential regulator we always want to take into account Basel developments and it is very difficult to move away.”

A helping hand to SMEs

In a bid to help support economic activity within the politically important SME sector, Europe’s lawmakers allow financial institutions to apply lower capital treatment to SME synthetic securitisations under certain conditions. The conditions are laid out in amendments to the 2013 Capital Requirements Regulation.

They include the following:

  • the bank’s positions must be senior
  • underlying exposures must follow clear eligibility criteria
  • active portfolio management of the underlying exposure is not allowed
  • interest payments must be based on generally used market interest rates
  • issuers must provide data on dynamic and static historical default and loss performance for exposures that are substantially similar to those being securitised
  • more than 70% of the borrowers in the pool must meet the European Union’s definition of an SME

Under the new Securitisation Regulation, which came into effect in January 2019 and is meant to revive Europe’s moribund securitisation market, the EBA must report back to the European Commission on whether the STS label is feasible for balance sheet synthetic deals. The report will be the first step towards synthetics becoming eligible for STS.

Once the EBA has published its report, the European Commission must present legislative proposals to the Council of the EU and the European Parliament for an STS label for synthetics. Getting the nod from parliament and council may be difficult as securitisations – particularly synthetics – bring back ugly memories of the 2008 financial crisis.

“I’m curious to see how this will go in the legislative process,” says Wilson of Rabobank. “Sympathy for the product may not be there amongst everyone in EU legislative bodies and also I’m very curious to see whether STS for synthetics will be introduced for all asset classes or whether it will remain very political only sticking to SME type transactions, which is very well possible.”

Hints of an SME-only label for synthetics are already present in the current Securitisation Regulation text, which explains the commission should draft its legislative proposal “with a view to promoting the financing of the real economy and in particular of SMEs”.

Limiting the STS label to SME portfolios would prevent other asset classes from benefiting from the preferential capital treatment the label might bring. Although SME portfolios are the most common assets in synthetics, the credit risk of large corporates is also securitised synthetically. Loan documentation often doesn’t allow banks to transfer the loan of a large corporate to another party, which leaves synthetics as the only avenue, since true sale securitisations transfer the whole loan to an investor.

Other loan types securitised through synthetics, though not to the same extent as SME and large corporate loans, include trade finance and project finance.

Weighting game

In practice, the Securitisation Regulation hikes the capital requirements for asset-backed securities. Under the previous rules, where risk weights for the senior tranche of a securitisation were calculated using the internal ratings-based approach, there was a floor of 7%. The new regulation more than doubles this floor to 15%. For deals that meet the STS criteria, the floor drops to 10%.

“With the revised formula in place at the start of this year, the new risk weight floor is normally 15%, which is quite an increase from a 7% floor and is quite burdensome for the economic benefit of these transactions,” says Wilson of Rabobank.
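A stylised calculation shows why the new floor is “quite burdensome”. It assumes the standard 8% minimum capital ratio and an IRB-modelled risk weight that falls below every floor; all figures are illustrative, not drawn from any actual deal.

```python
# Back-of-envelope effect of the risk-weight floor on a retained senior tranche.
# Assumes an 8% minimum capital ratio; exposure and modelled risk weight are invented.

def capital_required(exposure, modelled_rw, floor, capital_ratio=0.08):
    """Capital = exposure x max(modelled risk weight, floor) x capital ratio."""
    return exposure * max(modelled_rw, floor) * capital_ratio

exposure = 100_000_000   # e.g. a EUR 100m senior tranche
modelled_rw = 0.05       # IRB model output sitting below every floor

old_floor = capital_required(exposure, modelled_rw, 0.07)   # pre-2019 floor of 7%
new_floor = capital_required(exposure, modelled_rw, 0.15)   # new 15% floor
sts_floor = capital_required(exposure, modelled_rw, 0.10)   # 10% floor with STS label

print(old_floor, new_floor, sts_floor)
```

On these numbers the move from a 7% to a 15% floor more than doubles the capital tied up by the senior tranche, and the STS label claws back only part of the increase – which is why issuers say the label matters chiefly through its capital effect.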

Capital relief for banks forced to hold senior tranches of synthetic securitisations could provide a boost to the economics of issuing such deals. For a transaction to be profitable, the capital costs of holding senior notes and the coupon paid to junior noteholders must be lower than the cost of capital for the credit risk on the loan portfolio being transferred.

“Preferential capital treatment makes the transaction more cost-efficient for banks and banks are therefore likely to issue more transactions, in turn providing greater scale and choice to investors like us,” says Kaikobad Kakalia, chief investment officer at private debt manager Chorus Capital Management.

The move would have limited financial value for investors such as Chorus, though.

“STS with lower capital requirements is useful for us in an indirect way because it incentivises more volume,” Kakalia says. “However, there is very limited direct benefit for us as an investor.”

Some even suggest that the injudicious use of the STS label could leave banks exposed to extra costs in the form of increased operational risk capital from the risk of regulatory penalties.

Issuers of trades that are given the STS label but later turn out to be non-compliant can face an administrative sanction from their local supervisor of no less than €5 million ($5.6 million) and up to 10% of their total annual net turnover. They could also be given a temporary ban from classing a securitisation as STS.

“If there is more risk associated with applying for an STS label from banks possibly being fined if they make a mistake, it could introduce operational risk and banks would have to resource some capital for that risk,” says Mascha Canio, head of credit and insurance linked investments at pension fund manager PGGM. “If that is the case, STS may actually lead to an increase in capital, which is obviously not going to be helpful.”

Mixed incentives

Regulators are enthusiastic about the residual benefits that the STS badge might confer on synthetic securitisations, even without capital relief. Issuers and investors are less excited.

At the June conference, Kovalcikova of the EBA said an STS label could promote greater standardisation between structures banks use. Bradbury of StormHarbour Securities points out how this could advantage banks: “If the market became more standardised, the various regulators would have less need to adjust requirements for acceptable risk transfer practices and this in turn would likely help banks to do capital planning more effectively.”

Most sources agree that more standardisation is desirable in the synthetic market but question whether STS alone would achieve that as banks have little other encouragement to issue STS-compliant synthetics if there is no preferential capital treatment.

“The only reason why STS is useful is because it helps the bank reduce its cost of capital and in turn creates an incentive for the bank to issue more transactions,” says Kakalia at Chorus Capital. “I would not expect a bank to feel incentivised to structure a transaction to meet STS requirements if they are unlikely to get any capital benefit.”

Even if the STS label stimulates greater activity in the market, some warn about the dangers of new, inexperienced investors relying on the official imprimatur without performing due diligence on deals.

“In general it is good to have more investors in this market but I don’t think it is very prudent for less knowledgeable investors to begin investing in these types of transactions solely as a result of a synthetic STS label,” says Rabobank’s Wilson.

Increased activity could also drive down prices for the product, which may affect the make-up of the investor base. Lower returns could ward off hedge funds and specialist credit funds that currently invest in the market. Bradbury of StormHarbour Securities suggests the vacuum would be filled by large insurers and real money firms – investors that are more likely to offload the risk at times of volatility.

“In the event of a more serious shock or a downturn, this tool might become unavailable quickly because those investors are typically more sensitive to wider market movements,” he says.

Contradiction in terms

There is a creeping suspicion that synthetic STS is a paradox: synthetic securitisations are, by nature, complex, yet the STS label is designed as a badge of simplicity and standardisation.

“All risk transfer trades are fairly bespoke. ‘Simple’ criteria for synthetics is therefore somewhat of a misnomer,” Bradbury says. “What they mean is that there shouldn’t be any features which are, if you like, highly bespoke.”

The EBA hopes to tackle this contradiction by addressing three areas for STS compliance, which Kovalcikova outlined at the conference. First, measures to mitigate counterparty credit risk in the structure. Second, acceptable structures for transferring credit risk, which was among recommendations in a discussion paper published by the EBA in 2017. Third, a synthetic-adapted set of the original STS criteria.

Investors and issuers agree the first condition is necessary and is already ingrained in most – if not all – synthetic deals as neither party is interested in the risk of the other failing to pay its obligations.

“We actually don’t deposit our capital with the bank that is issuing the risk-sharing transaction because we believe in taking the credit risk of the underlying portfolio and not taking the credit risk on the bank itself,” says Canio of PGGM. “We would certainly be supportive if that is one of the criteria in STS.”

PGGM funds the full notional amount of the investment into a separate account, which the bank can draw on if the investor becomes insolvent. The cash is then invested in highly rated government bonds – for example US Treasuries or German government bonds – and held by a third-party custodian.

In practice, Bradbury says most risk transfer deals would already comply with aspects of the STS rules. For example, most trades reference asset pools that are fairly homogenous, as required under the Securitisation Regulation. And very few existing deals reference exposures that are already suffering credit impairment at the time the synthetic securitisation is issued, which is forbidden for STS transactions.

But there is wider uncertainty about whether an adapted set of STS criteria will work for synthetics, particularly disclosure requirements.

Issuers are wary of disclosing deal information because it often contains data that is sensitive to the bank. They make an exception for investors, who receive a wealth of information on the portfolio, loan origination, underwriting, credit modelling and risk management processes.

“There is significant disclosure in these transactions, because without it we would have insufficient information to do our analysis and price the transaction,” says Kakalia of Chorus Capital. “However, these transactions are typically private. If you are not the issuer and you are not the investor then the trade is not available for you to look at.”

As viewers of Westworld understand, synthetic androids may look and act like humans, but eventually their differences become evident.

Excessive spread

Synthetic securitisations use special-purpose vehicles (SPVs) to structure transactions. These SPVs often accumulate some of the excess spread between the coupon payable to noteholders and the yield on the underlying assets. This is sometimes used to pay junior tranche holders for expected losses on the portfolio as set out in offering documents.

The junior notes take a hit only on unexpected losses. This use of excess spread allows the issuer to avoid paying a double premium to investors for both expected and unexpected losses.

The EBA has raised multiple concerns over the use of excess spread, including that it may hide the risk retained by issuers in synthetics and erode the effectiveness of the risk transfer.

In response, the EBA has made several recommendations. First, excess spread should be capped at one times expected annual losses, to make sure it is not used to cover unexpected losses as well. Second, issuers should not syphon off any excess spread that hasn’t been used to absorb losses in a single year, but should leave the excess spread in place and make an adjustment each year if necessary. Third, originators should deduct the excess spread from their regulatory capital, as if the bank were holding a first-loss tranche.
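To make the first and third recommendations concrete, here is a minimal Python sketch; the portfolio figures, loss rate and function name are all invented for illustration and are not drawn from the EBA's paper or any actual transaction:

```python
# Illustrative sketch of the EBA's proposed excess spread treatment.
# All figures and names are hypothetical.

def capped_excess_spread(portfolio_notional, annual_el_rate, available_spread):
    """Cap retained excess spread at one times expected annual losses,
    and deduct the retained amount from capital as if it were a
    first-loss tranche held by the originator."""
    expected_annual_loss = portfolio_notional * annual_el_rate
    retained = min(available_spread, expected_annual_loss)  # the 1x cap
    return {
        "expected_annual_loss": expected_annual_loss,
        "retained_excess_spread": retained,
        "capital_deduction": retained,  # treated like a first-loss tranche
    }

result = capped_excess_spread(
    portfolio_notional=1_000_000_000,  # 1bn reference portfolio
    annual_el_rate=0.003,              # 30bp expected annual loss rate
    available_spread=5_000_000,        # spread the structure could trap
)
print(result)  # retained spread and deduction are both capped near 3m
```

Note that under the third recommendation the deduction would apply whether or not losses materialise, which is why issuers relying on a 'use-it-or-lose-it' mechanism object.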

The second and third restrictions will potentially eliminate the inclusion of a so-called ‘use-it-or-lose-it’ mechanism in SPVs, which allows banks to take back any spread not used to cover losses. This mechanism has previously been accounted for as future income by some issuers.

“Most of the investors and issuers I have spoken to that use the ‘use-it-or-lose-it’ are of the opinion that excess spread is something that you don’t hold capital against, as it is future margin income, and therefore it should not be deducted from your capital once you do such a transaction,” says Thomas Wilson at Rabobank.

Others claim the rules mean banks must cherry-pick higher quality assets with lower expected loss rates to securitise. Doing so limits the amount of excess spread required in the SPV, and hence the size of the deduction from regulatory capital.

“Some issuers potentially have whole programmes of risk transfer securitisations where their use of excess spread will change completely,” says Robert Bradbury of StormHarbour Securities. “There are some banks who believe that the excess spread limitations incentivise focusing on better quality assets with lower expected loss, which is not necessarily a consequence intended by the regulator.”

For some issuers, however, the tougher treatment of excess spread will remove a competitive disadvantage, as their local regulators had never allowed such a mechanism to be used in the first place.

Aside from the recommendations on excess spread, other elements of the EBA discussion paper have proven uncontroversial. For instance, regulators expect principal repayments to be distributed to senior noteholders first of all, to ensure that the first-loss risk has genuinely been transferred away from the bank on to the holders of the junior tranches. Issuers would be allowed to distribute payments equally to noteholders – known as pro-rata amortisation – only once certain conditions are met. For example, if cumulative losses are greater than the cumulative expected losses reported by the originator, then the transaction could switch to pro-rata amortisation.
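The difference between the two amortisation styles can be sketched as follows; the tranche sizes are hypothetical and the functions illustrative, not a model of any real deal:

```python
# Sketch of the two principal-allocation schemes described above, using
# hypothetical tranche sizes.

def sequential(payment, senior, junior):
    """Senior noteholders are repaid first; junior holders receive
    principal only once the senior tranche is fully repaid."""
    to_senior = min(payment, senior)
    to_junior = min(payment - to_senior, junior)
    return to_senior, to_junior

def pro_rata(payment, senior, junior):
    """Principal is distributed in proportion to outstanding balances."""
    total = senior + junior
    return payment * senior / total, payment * junior / total

# A 100m principal repayment against a 900m senior / 100m junior structure
print(sequential(100e6, 900e6, 100e6))  # everything goes to the senior notes
print(pro_rata(100e6, 900e6, 100e6))    # split 90/10 across the tranches
```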

Wilson feels the EBA recommendations are a sensible blueprint for changes to synthetic structures.

“The discussion paper is already quite a good set of criteria and I don’t think we need any further restrictions, for example eliminating pro-rata amortisation completely or eliminating synthetic excess spread for the synthetic to meet a high-quality label,” says Wilson.

Editing by Alex Krohn

Goldman Sachs builds legal reserves

By Alessandro Aimone | Data | 16 July 2019

Goldman Sachs put aside $66 million to cover legal costs in Q2, up 78% on the first quarter.

Chief financial officer Stephen Scherr said the bank was adjusting its estimate of ‘reasonably possible losses’ (RPL) to $2.5 billion from $2 billion. This is the bank’s estimate of aggregate losses, in excess of accumulated reserves, from legal costs that may or may not be incurred.

Year-to-date, legal provisions are $103 million, compared with $192 million for the first half of 2018.

Who said what

“We don't give details on what the elements of that reserve are … In terms of statements otherwise being made [on 1MDB], I think it would be inappropriate for me to sort of speculate on what others intend by statements they made. I think as we have said in the past, we're in a cooperative engagement with the authorities. We intend to stay that way. And as and when there are further developments, we'll be in a position to talk about that a bit more” – Stephen Scherr, chief financial officer at Goldman Sachs.

What is it?

Banks set aside provisions to cover the costs of transgressions against applicable laws and regulations, along with the associated legal expenses.

Why it matters

Goldman Sachs is involved in a series of ongoing judicial and regulatory battles, including those relating to the sale of mortgages and alleged manipulation of foreign exchange markets.

The case that’s grabbed headlines, though, concerns the bank’s involvement in bond issues that it arranged for 1Malaysia Development Berhad (1MDB), a Malaysian sovereign wealth fund that was looted by corrupt politicians. 

Goldman Sachs has consistently denied wrongdoing. The Malaysian criminal case against the bank will begin this September.

Whatever the reason, the raising of the RPL estimate by $500 million suggests the bank is bracing for potential future legal losses. But even if its upper-bound estimate were realised, the damage to its bottom line would be nothing like that incurred in 2015, when billions were paid out to US authorities to settle allegations the bank mis-sold mortgage-backed securities.

Get in touch

Sign up to the Risk Quantum daily newsletter to receive the latest data insights.

What could have triggered the spike in legal provisions? You can drop us a line at alessandro.aimone@risk.net, send a tweet to @aimoneale, or get in touch on LinkedIn

Keep up with the Risk Quantum team by checking @RiskQuantum for the latest updates.

Tell me more

Legal fines dent StanChart profits

Reserve release fluffs UniCredit's Q1 income

Legal charges topped £6 billion at UK banks in 2018

View all bank stories

Is there safety in numbers for op risk? One regtech thinks so

By James Ryder | Profile | 15 July 2019

Acin’s library of op risks allows banks to compare their controls with their peers’

Given its sheer breadth, operational risk manages to be at once the biggest, flashiest, most out-of-control of hazards, yet also the one that is the most sub rosa, anecdotal and impervious to metrics.

A young regulatory tech firm claims to have an answer. Acin, just two years old, has been compiling a reference library of op risks and controls among the 12 banks it now serves. The idea is to standardise risks and controls across not only a particular bank, but across the industry, to give clients a sense of how they stack up to their peers, should regulators come knocking.

“Banks want to measure up against their peers,” says Paul Ford, founder and chief executive of Acin. “We’re moving controls from an artisan process to an engineered process.”

Already, the regtech, which is based in London, has netted several top-tier banks, among them Credit Suisse, Societe Generale and Standard Chartered.

But Acin’s ambitions are greater: it hopes to eventually introduce ratings of banks’ op risk controls, similar to those that offer a gauge of a lender’s creditworthiness.

Ford – a former British Army officer who later served as chief operating officer at Credit Suisse’s Europe, Middle East and Africa division and latterly at Barclays Wealth – is clearly enthused by the idea of putting a number to a bank’s “conduct and culture”: the intangibles that go to the heart of op risk.

But for now, the regtech is working on building its library. Each member in the network volunteers its library of non-financial risks and controls. The delivered data is reviewed and scrubbed, then added to a centralised database of advanced control identification numbers, or Acins, presented in a clear and accessible form to all subscribers.

Ford’s message is that there is greater safety in a collective. The member banks can measure themselves against a collective benchmark, confident their risk and control library is thorough and – perhaps more to the point – the same as those of their peers.

“You can’t have tolerances unless you have standardised measures,” Ford tells Risk.net.

In action, the platform itself is straightforward: a clean baby-blue interface that allows one to search with varying degrees of specificity – by business line, control type (“detective or preventative”), control owner and so on. Select a particular risk – say, internal fraud – and the database will spit out a number of control suggestions.

A big part of standardisation is clarifying terms. One way banks try to combat internal fraud is through a yearly ‘network cleanse’, during which staff are not allowed access to their employer’s systems for two weeks. But one bank might call it ‘compliance leave’, while another calls it ‘block leave’ and yet a third might refer to it as ‘mandatory time off’, Ford notes.

Further, control names and descriptors often vary internally across business lines. A large bank might have three different divisions that refer to the same control by three different names, scuppering any chance of benchmarking and, with it, an assessment of whether the control is actually working or not.
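A minimal sketch of that kind of name standardisation, with invented aliases and an invented identifier format (real Acin codes are not published in this form):

```python
# Map local control names to one shared identifier. The alias table and
# identifier scheme below are hypothetical, for illustration only.

ALIASES = {
    "compliance leave": "ACIN-0001",
    "block leave": "ACIN-0001",
    "mandatory time off": "ACIN-0001",
}

def canonical_control(name):
    """Map a bank's local control name to a shared identifier so the
    same control can be benchmarked across firms and business lines."""
    return ALIASES.get(name.strip().lower(), "UNMAPPED")

print(canonical_control("Block Leave"))   # resolves to ACIN-0001
print(canonical_control("desk holiday"))  # unknown name -> UNMAPPED
```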

The industry is taking note: Cristóbal Conde, a seasoned fintech investor, was one of the firm’s early backers.

“There’s been an engineering-led approach toward market and credit risk for 30 years – why is it that operational risk has always been anecdotal? How many examples do we need?” says Conde, who is an adviser to Acin. “What they’re saying is, these problems aren’t unknown unknowns. They’re knowable unknowns.

“Op risk has been talked about for 15, 20 years, but always in words and anecdotes and opinions. That’s why I invested. It’s the first start-up that takes an engineering approach to the problem.”

Ford says the basic, foundational library put together by Acin’s team before any bank had joined contained “80–90%” of the risk and control data it now has. Another stat the regtech likes to flaunt is that every firm it assesses, on average, lacks 32% of the combined risks and controls in the industry database.
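The benchmark-gap figure works out as a simple set difference between the industry library and a firm's own. A toy sketch, with hypothetical control identifiers:

```python
# Toy sketch of the benchmark-gap figure quoted above: the share of the
# industry's combined control set that a given firm lacks.

industry_controls = {"ACIN-001", "ACIN-002", "ACIN-003", "ACIN-004", "ACIN-005"}
firm_controls = {"ACIN-001", "ACIN-003", "ACIN-005"}

missing = industry_controls - firm_controls     # controls the firm lacks
gap = len(missing) / len(industry_controls)
print(f"missing {len(missing)} controls, gap {gap:.0%}")  # 40% in this toy case
```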

Ford adds that, while the service can acquire new insights from every new member, it’s rapidly reached the point of diminishing returns.

“We’ll learn a little from the first bank,” he says. “We learn a bit less from the second than from the first. By the time we get to the third bank, we’ve probably learned all there is. The fourth doesn’t give us anything. That’s the network effect.”

The benefits of a broader, standardised control library also become apparent during supervisory visits, Ford says.

“If the regulator came the day before a firm worked with us, the gaps would be apparent,” he says. “We come in and show the baseline. We say: ‘Here are the answers. Here are the risks and controls you should have.’ And then they can say to the regulator: ‘I have identified my gaps, and I have a plan to implement. I’m being proactive.’ That’s a very different conversation to: ‘I didn’t know, and I wasn’t doing anything about it.’”

Membership in Acin, its heads argue, is also an easy way for banks to demonstrate the kind of collaborative spirit regulators are currently so taken with. A firm pooling its information for the good of a wider network will appear a lot more communally minded than one that isn’t, Ford points out.

The regtech’s ultimate goal is to elevate operational risk to a discipline that can stand alongside credit and market risk, he says. The field is dominated by backward-looking metrics, but there is no reason that can’t change “with the right datasets”.

Ford hopes the regtech can be the first to come up with a reliable method of calculating and modelling the types of expected losses caused by ‘fat fingers’, rather than by the probabilistic defaults of counterparties.

Acin’s success, he thinks, could lead to “the creation of a control rating – the equivalent of a credit rating for credit risk. A control rating for operational risk will take our data as a critical input”.

Such a control rating would, the firm hopes, help markets and regulators alike assess the health of banks in a new way. Ford says that calculating a firm’s control rating would rely on Acin’s data in much the same way that a credit analyst looks at financial performance for a credit rating. The regtech’s standardised control sets, and the benchmarking and peer comparison they allow, would function as “anchored sets of data” from which quantitative assessments of control performance could be derived, he says.

“That’s the level at which we stop,” says Ford. “We’re an information provider, and not an analyst. Other firms will do that.”

Editing by Joan O’Neill

UK bank RWAs inch up on credit and counterparty risk

By Alessandro Aimone | Data | 15 July 2019

Total risk-weighted assets (RWAs) across UK banks edged upwards in the first quarter of the year, pushed higher by increased credit and counterparty exposures. 

Figures from the Bank of England show total RWAs rose £19 billion ($24 billion), a shade under 1%, to £2.85 trillion in the three months to end-March. Credit and counterparty RWAs drove the overall rise, growing £24 billion (1.2%) to £2.07 trillion.

In contrast, aggregate market RWAs and operational RWAs dropped by less than 1% each, to £364 billion and £305 billion, respectively. Credit valuation adjustment (CVA) RWAs were also down quarter-on-quarter, by 6.5% to £87 billion.

Other RWAs (those related to settlement risk, securitisation exposures and regulatory adjustments) surged 33% to £28 billion quarter-on-quarter.

Compared with Q1 2018, total RWAs are down 1.3%. 

UK banks remain well-capitalised, with the sector’s total capital ratio at 21.2%, just 10 basis points lower than the previous quarter.

What is it?

The Bank of England publishes quarterly statistical releases on the capital levels and RWAs of the UK banking sector.   

RWAs are used to determine the minimum amount of regulatory capital that must be held by banks. Each banking asset is assessed on its risks: the riskier the asset, the higher the RWA and the greater the amount of regulatory capital that must be put aside.
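The relationship can be sketched with the Basel 8% minimum total capital ratio and a hypothetical two-asset portfolio (the 35% weight is the standardised-approach figure for residential mortgages; the balances are invented):

```python
# Worked sketch of the RWA-to-capital relationship described above.

def min_capital(exposures):
    """RWA = exposure x risk weight; minimum capital = 8% of total RWA."""
    rwa = sum(amount * weight for amount, weight in exposures)
    return rwa, 0.08 * rwa

# (exposure, risk weight): a mortgage book at 35%, corporate loans at 100%
rwa, capital = min_capital([(500e6, 0.35), (300e6, 1.00)])
print(f"RWA {rwa:,.0f}, minimum capital {capital:,.0f}")  # about 475m and 38m
```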

Why it matters

Credit and counterparty RWAs increased for the first time since Q2 2018. This likely reflects business growth over the first quarter, though a deterioration in the credit quality of outstanding loans could also have contributed.

On the flip side, CVA RWAs declined for the third consecutive quarter. This could be a side-effect of banks funnelling more and more derivatives trades through clearing houses, which frees them from the need to hold CVA capital against them. But the largest UK banks actually saw their CVA RWAs increase quarter-on-quarter, perhaps a reaction to greater non-cleared swaps activity by these big-hitters.

Get in touch

What do you make of the latest BoE figures? You can drop us a line at alessandro.aimone@risk.net, send a tweet to @aimoneale, or get in touch on LinkedIn

Tell me more

Lower credit risk shrinks UK banks’ RWAs

Top UK banks' CVA charges up 10% in Q1

View all regulator stories

‘Bad banks’ through the ages

By Alessandro Aimone | Data | 10 July 2019

Deutsche Bank is setting up its second ‘bad bank’ since the financial crisis as part of its latest restructuring effort. It's a strategy that's been used by seven other big lenders in recent years to warehouse and unwind unwanted assets.

Barclays, Bank of America, Citi, Credit Suisse, Lloyds, RBS and UBS all established special units to isolate toxic or illiquid investments post-crisis, some of which accounted for over one-third of their total risk-weighted assets (RWAs) at inception.

Deutsche’s planned ‘bad bank’ – to be called the capital release unit – will take on 21.1% of its total end-2018 RWAs. This is proportionally smaller than a previous entity, the non-core operations unit, which made up 25.8% of RWAs when it was introduced in Q3 2012.

Other European ‘bad banks’ have been on a similar scale. UBS’s non-core and legacy portfolio unit, set up in 2013, made up 36.7% of a total of Sfr259 billion ($263 billion) of RWAs at inception. Credit Suisse’s strategic resolution unit, cooked up in 2015, became the new home of 21.3% of its Sfr290 billion RWAs. 

UK banks Lloyds, Barclays and RBS also set up wind-down entities post-crisis. Lloyds’ non-core unit, created in 2010, held 34.7% of total RWAs of £406 billion. Barclays’ non-core entity made up 25.2% of a total of £436 billion when it was established in 2013. RBS’s capital resolution unit made up 17% of £386 billion of total RWAs when it was set up in 2013. 

US giants Citi and Bank of America also formed ‘bad banks’ in the past, though their respective RWA values were not disclosed.

However, Citi’s unit, set up in 2009 as Citi Holdings, held $649 billion of assets as of the end of the second quarter of that year, about 35% of the group’s total assets.

Bank of America’s resolution division, known as legacy assets and servicing, was established in Q1 2011 and held $169.1 billion of the firm’s loans, about 18% of its total, at inception. 

What is it?

‘Bad banks’ are pens for unwanted assets that are segregated from the performing, or ‘core’, assets of a firm. By separating out these toxic investments, investors get a clearer look at the parent bank’s financial health.

Typically, the assets in a ‘bad bank’ are unwound or sold, so that the parent company is able to free up capital that can be used to plump regulatory buffers or give back to shareholders.

Why it matters

The bigger they are, the harder they fall? A ‘bad bank’ chock-full of unwanted assets could take a long time to unwind, dragging on a firm’s profitability for years.

Deutsche’s first resolution unit was abolished in late 2016, four years after its inception. Its successor is scheduled to shutter in 2022, just three years from now. Yes, the capital release unit is smaller in terms of RWAs, but not by much.

‘Bad banks’ are also home to assets with high risk densities, as viewed through a regulatory capital lens, making them expensive to prospective buyers. This can frustrate efforts to speed up a wind-down through asset sales.   

Yet there are success stories. Credit Suisse closed its ‘bad bank’ ahead of schedule earlier this year. In 2016, the unit cost the group $3 billion in losses. This year, its drag is projected to be just $500 million.

Get in touch

Share your thoughts with us. You can drop us a line at alessandro.aimone@risk.net, send a tweet to @aimoneale, or get in touch on LinkedIn

Tell me more

Restructured Deutsche would be slimmest eurozone G-Sib

End of an era: Credit Suisse dissolves resolution unit

View all bank stories

Asia-Pacific banks revise conduct scorecards in culture push

By Aileen Chuang | News | 10 July 2019

DBS, Maybank and others tweak performance metrics to reward good behaviour over hard sales

With regulators from Sydney to Singapore turning the screws on conduct risk, banks in the region are making structural changes to the way they measure employee performance.

Take the case of Maybank, Malaysia’s largest lender by assets, which has overhauled the individual compensation model across its network. When evaluating a staff member’s performance, the bank now takes into account metrics such as client satisfaction and ethical behaviour, alongside financial targets.

The bank uses an assessment tool known as a balanced scorecard, which rates employees against key performance indicators (KPIs). Scores can be used to set remuneration.

“If you are found to be negligent, resulting in non-compliance of a particular regulatory requirement – even though you may have met all your other KPIs – the single not-met can knock you off,” says Jon Yeo, chief compliance officer at Maybank’s Singapore unit. Sanctions include pay reduction, delayed promotion or even sacking.

“That’s the severity of our balanced scorecard. It tells people the importance we attach to every KPI that is set in our institution,” Yeo adds.

The changes come as Singapore’s regulator readies its long-awaited regime governing personal conduct – a move that brings it in line with other jurisdictions such as Hong Kong and Australia. Scorecards are just one tool banks are deploying: in Australia – following a scathing review into conduct in the financial sector – ANZ has altered the way it incentivises branch staff by abolishing sales targets.

Singapore lender DBS subjects all its employees to a balanced scorecard that it says is now more forward looking, while Commonwealth Bank of Australia has tweaked the weightings of the components that make up its scorecard.

Maybank has removed all weightings from the scorecard and has rolled out a binary system of met or not-met. It has also standardised the wording of the scorecard for indicators such as financial targets, regulatory requirements and customer feedback. Previously, individual business or support units were responsible for setting their own metrics. Yeo says the changes have combined to reduce the number of regulatory breaches.
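The knockout logic Yeo describes can be sketched as follows; the KPI names and outcome labels are invented for illustration:

```python
# Toy sketch of the met/not-met scorecard with a regulatory knockout.

def evaluate(kpis):
    """All KPIs are binary. A failed regulatory KPI overrides everything
    else: the single not-met 'knocks you off', whatever else was met."""
    if not kpis.get("regulatory_requirement", False):
        return "sanction review"
    return "met" if all(kpis.values()) else "partially met"

print(evaluate({"financial_target": True,
                "customer_feedback": True,
                "regulatory_requirement": False}))  # sanction review
```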

“Consistency is always the challenge that banks face when applying a balanced scorecard across organisations,” Yeo says. “We want to achieve a more consistent way of measuring it even in the cases where we notice there are wrongdoings, so the punishment is consistent as well.”

The revisions go some way to meeting increased scrutiny from regulators across the region. Australia imposed an accountability code known as BEAR, or the Banking Executive Accountability Regime, last year. The framework clarifies the roles and responsibilities of bank leaders and comes after an enquiry into misconduct in the financial services industry. The probe, which prompted high-level resignations within banks, found that lenders encouraged mis-selling of products with skewed incentive schemes, among other malpractices.

In response, Commonwealth Bank of Australia capped the weighting of financial metrics such as sales targets at 30% of an employee’s balanced scorecard. The limit applies to retail customer-facing staff and their managers. The bank also started to reward branch tellers based on customer feedback and manager observations. Similarly, ANZ removed sales targets on its branch tellers to prevent a repeat of missteps.

Hong Kong brought in a similar regime, called Manager-in-Charge, in 2017. The MIC regime puts the onus of maintaining conduct standards on to senior managers. Akin to the UK’s Senior Managers and Certification Regime, it makes appointed MICs responsible for certain core functions, including operational control and review, risk management, anti-money laundering, and compliance.

The Monetary Authority of Singapore is set to release conduct risk guidelines to ensure senior management and employees fulfil their responsibilities and are incentivised appropriately. In June, it published a response to the Individual Accountability and Conduct consultation paper. The previous month, MAS formed a steering group with the Association of Banks in Singapore to promote better culture and conduct among banks.

“Financial institutions should design their incentive frameworks and structures to adequately consider behavioural and conduct factors, in addition to financial KPIs,” MAS wrote. “The compensation framework should provide a means, among other measures which the financial institution may have in place, to hold senior managers accountable for their conduct.”

Lee Alam, head of Allen & Overy’s Asia-Pacific consulting business, who was head of global regulatory affairs at Commonwealth Bank of Australia until April last year, says most individual accountability regimes will come with a requirement to change incentive structures for executives.

“The industry has taken a lot of steps to rebalance scorecards for executives and make sure they take account of more than just financial metrics; there will be much more on the non-financial side,” he says.

The concept of the balanced scorecard was proposed by Harvard Business School professor Robert Kaplan and consultant David Norton in 1992, and financial institutions began to adopt the concept more readily after the financial crisis of 2008. Most banks, including Maybank and DBS, review their scorecard on a periodic basis.

The disciplinary committee at Maybank considers misconduct incidents and regularly reviews whether the application of its balanced scorecard and other practices have any gaps.

DBS’s balanced scorecard is set and updated annually by the group management committee and reviewed by the board, with specific goals cascaded down to all staff, according to Shee Tse Koon, Singapore country head at DBS. Employee feedback is gathered through a dedicated section on culture and conduct in the annual staff survey.

DBS says it has made its balanced scorecard more forward-looking to prepare for the unexpected in the years to come.

“The balanced scorecard has always been talking about a balance between financial goals with people, customers and risks. But we also have a balanced scorecard that looks beyond the current deliverables to getting ourselves future ready,” says Shee, who also chairs the Culture and Conduct Steering Group that has representatives from 13 banks, the Monetary Authority of Singapore and the Association of Banks in Singapore.

Beyond the scorecard, DBS also has a so-called fair dealing committee, co-chaired by its group chief executive and Shee, to discuss conduct and culture issues on a regular basis.

The purpose of these industry efforts is to instil consistent expectations of conduct that all employees are aware of.

“I can tell you all the rules and regulation. You can always derail knowingly or unknowingly,” says a Hong Kong-based senior operational risk manager at a global financial institution. “What an organisation must ensure is that there is a secured control process in place that is transparent enough to everyone – similar to what the regulators want.”

Correction, July 11, 2019: An earlier version of this article stated that Shee was head of DBS’s culture and conduct steering group. He is, in fact, head of the steering group formed by MAS and the Association of Banks in Singapore.

Editing by Alex Krohn

Model citizens

By Tom Osborn | Opinion | 8 July 2019

A spectre is haunting Europe – the spectre of model risk. Launched in 2016, the European Central Bank’s (ECB’s) Targeted Review of Internal Models (Trim) has forced a step-change in attitudes among European lenders towards ensuring their capital models are fit for purpose. In keeping with other regulators worldwide, the watchdog’s team of inspectors is visiting banks to check everything from internal governance processes to the data inputs that underpin modelling assumptions.

If the early evidence from the review is anything to go by, banks still have significant work to do to get their houses in order. The latest set of findings, on the safety and soundness of banks’ market risk models, landed in April – and made for grim reading. Across the 30 banks that had been subjected to supervisory visits, the ECB found an average of 32 issues with modelling practices per bank, nine of them deemed severe. 

The review is already proving costly to lenders – and not just from a compliance point of view: ABN Amro cited changes made to its modelling practices as driving a €1.3 billion jump in credit risk-weighted assets during the first quarter of this year – implying the regulator thought its models were not adequately gauging the credit risk in its loan portfolios previously, necessitating a top-up. 

For global lenders, Trim followed hot on the heels of the US Federal Reserve’s SR 11-7 guidance on model risk management (MRM) – published in 2011, though not enacted until 2012. Where Trim is, as the name suggests, targeted in scope, SR 11-7 is broad enough to capture anything that looks like a model within a bank, from a value-at-risk model to a simple spreadsheet-based factor model. 

In reality, of course, Trim was a politically motivated project – partly designed to keep pace with SR 11-7, but also to shore up confidence in the use of internal modelling among European watchdogs keen to have some collateral to back their pro-model stance during the final negotiations over Basel III. In the opposing camp were US regulators – distrustful of internal modelling practices in the wake of major failings revealed during the financial crisis, and preferring instead the use of revised standardised approaches where possible, as well as an output floor to bind internal model estimates to these. 

All of this has meant a compliance headache for banks, and a huge spend on hiring or redeploying quants from model development to risk management and validation teams. Quants don’t come cheap, nor do the army of consultants brought in to oversee the process. Sources tell tales of one US bank that attempted to lower costs by cutting as many PhD model quants as it could, and replacing them with master’s graduates – only to be red-flagged by its regulator. 

While some of the changes to validation practices have required quant upskilling, much of the change has been around people and processes – motherhood and apple-pie operational risk practices such as establishing independent oversight and effective challenge during the model development and deployment phases. 

Anne-Cécile Krieg, deputy head of MRM at Societe Generale, notes that the mindset has shifted. All three lines of defence are now expected to take responsibility for MRM, where previously it tended to be left to the second line. Specific roles have been allocated across the three lines, and the discipline is fully embraced and embedded. The first line alone contains a significant number of stakeholders – the model’s designer, the person implementing it, its users and those tasked with surveillance – and all of these roles are now identified, with increasing emphasis on the user and model owner roles.


Op risk data: losses decline sharply in first half

By ORX News | Opinion | 3 July 2019

Conduct losses account for most of $8.5 billion total. Data by ORX News

The largest operational risk loss by financial firms in June is an estimated 20 billion rupee ($287.6 million) fraud at Indian investment firm I Monetary Advisory. The specialist Islamic finance house, based in Bengaluru, is thought to have diverted investments from new customers to pay returns to existing investors, in a type of Ponzi scheme. By June 18, 40,000 complaints had been filed against IMA.

The company used investors’ funds for trading in gold, clothing and infrastructure. Investors were promised returns ranging from 2% to 4% within 45 days, and annual returns totalling 36%. Its founder and other directors are under investigation by local authorities.

The second largest publicly reported loss concerns Austria’s Erste Group and its activities in the Romanian mortgage market. On June 25, the bank made a provision of €230 million ($261.9 million) following a Romanian high court decision against its Banca Comercială Română subsidiary, say local reports.

The court ruled that BCR’s mortgage arm accepted deposits for a state-backed lending programme from ineligible candidates, namely those under 18 and over 65. The bank also failed to ensure customers used their loans for the purpose stipulated in the contract, and to request documents justifying the loans.

In third place, UK-based cryptocurrency trading company Control-Finance defrauded investors of bitcoins worth $147.0 million, say charges filed by US regulators. The firm tricked customers, including those in the US, into buying bitcoins from third-party vendors and transferring the assets to Control-Finance, where expert crypto traders would supposedly generate up to 45% monthly returns for customers. The US Commodity Futures Trading Commission says 1,000 investors stand to lose nearly 23,000 bitcoins in the fraud.

The fourth-largest loss is the $94.3 million that State Street must pay government agencies for overcharging clients for expenses related to custody services. The overcharges include an undisclosed mark-up that State Street added to the cost of sending financial messages through the Swift network, as well as asset pricing and valuation services from third-party vendors, audit reports and mutual fund client reports. Over 17 years, State Street charged investment firms $170 million more than necessary.

Finally, Danske Bank expects to pay Dkr400 million ($60.9 million) in compensation to customers who were charged excessive fees for one of the bank’s investment products. Following the rollout of new Mifid rules for Europe’s financial markets, the bank revised the fee structure for its Flexinvest Fri product in 2017, setting fees too high relative to expected returns and rendering the product unsuitable for some customers.

As a result, Danske Bank dismissed Jesper Nielsen, its head of banking in Denmark. The fee slip and Nielsen’s departure come as Danske Bank reels from a €200 billion money laundering scandal.

Spotlight: Chilean banks face outsourcing failures

Banco de Chile, Banco Falabella and Santander were among 13 Chilean banks and credit card companies forced to block and replace customer debit and credit cards in June after an individual working for cash machine network provider Redbanc stole information relating to 42,000 cards. Banks will foot the bill for 82 cases of fraud totalling 23 million pesos ($34,000) reported by June 11.

Police discovered the theft – which Chilean media have described as the country’s largest data breach involving debit and credit cards – as part of a wider investigation into a card cloning network. The former employee had stolen the card data and card-reading equipment before duplicating the cards and attempting to guess their PINs.

Chilean senator Felipe Harboe Bascuñán says the data breach occurred because Redbanc lacked adequate security measures regarding its suppliers.

Among the banks forced to contact customers and block and replace cards are Banco de Chile, which blocked 9,000 cards, Banco Falabella (6,000) and Santander (1,000). Scotiabank, BCI and Banco Ripley were also affected.

Mid-year review: op risk losses fall sharply

Total op risk losses at financial firms fell during the first half of 2019, compared with the same period last year – continuing a trend of loss events declining in both frequency and severity for big banks in the last 18 months. ORX News recorded $8.48 billion in op risk losses between January and June, well down on the $25.6 billion in the first half of 2018. Even discounting 2018’s mega-loss at Chinese insurer Anbang, this year’s total has still declined by 40%.

If losses follow a similar pattern through the second half of this year, annual losses for the financial industry could finish more than four-fifths below 2014’s high-water mark of $100 billion.
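That projection is simple arithmetic, under the stated assumption that second-half losses mirror the first half; the figures below are the article’s own:

```python
h1_2019_losses = 8.48e9   # ORX News-recorded op risk losses, Jan-June 2019
peak_2014 = 100e9         # 2014 high-water mark cited above

# Assume the second half of 2019 matches the first half
projected_full_year = 2 * h1_2019_losses   # roughly $17 billion

# Fall relative to the 2014 peak
decline_vs_peak = 1 - projected_full_year / peak_2014
print(f"Projected fall from 2014 peak: {decline_vs_peak:.0%}")  # 83%
```

An 83% decline is what “more than four-fifths” refers to, though it rests entirely on the assumption that no second-half mega-loss emerges.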

Misconduct continues to account for the majority of losses. In the first half of 2019, nearly three-quarters (74%) of the total loss amount was attributable to conduct-related events, including the single largest loss during the period. In January, General Electric announced a $1.5 billion settlement with the US Department of Justice over claims that its now-defunct subsidiary WMC Mortgage mis-sold residential mortgage-backed securities. From 2005 to 2007, WMC originated 250,000 subprime mortgages, which it sold to investment banks for $65 billion.

Despite this large payment – as well as an undisclosed settlement reached between Credit Suisse and New York state in January, and a $150 million settlement between Morgan Stanley and California state in April – the frequency of fines, provisions and settlements relating to mortgage-backed securities has decreased so far this year to three, compared with seven in each half of 2018 and five in each half of 2017, perhaps indicating that firms have now settled the majority of outstanding claims relating to crisis-era events.

Additionally, Australian firms paid and provisioned A$996.6 million ($701.1 million) in the first six months of 2019 for customer remediation due to misconduct, during which time former judge Kenneth Hayne published his much-anticipated and critical report following the Australian Royal Commission into financial services.

Financial firms reported 21 data breaches in the first half of the year, affecting 909 million records and 4.2 million individuals. Despite cyber-related data compromise sitting at the top of risk managers’ agendas, cyber criminals were behind less than one-quarter (21%) of these events by frequency; instead, several stemmed from the accidental exposure of data on unprotected databases.

However, the single largest data breach was caused by a technology-related failure. Title insurance specialist First American Financial revealed in May that 885 million mortgage files had been left open to unauthorised access. The files included bank account numbers and statements, mortgage and tax records, social security numbers and other personal data.

External theft and fraud continues to drive commercial banking losses, accounting for over half (52%) of events attributed to this business line so far this year. Most notably, the revelation of accounting fraud at South African retailer Steinhoff, confirmed by PwC in March, left at least 21 firms exposed to losses from loans to the company. Individual reported amounts include $370 million at Citigroup, $292 million at Bank of America, $273 million at JP Morgan, ¥14 billion ($129.8 million) at Nomura and $110 million at Natixis.

While conduct and cyber remain key risks for banks, one theme from the first half of 2019 is the increased regulatory focus on operational resilience. As Risk.net recently reported, the Basel Committee on Banking Supervision is in the process of developing a set of metrics for operational resilience.

Twenty-three instances of technology and infrastructure failures cost firms $11.6 million between January and June, highlighting the importance of sound outsourcing risk management. For example, ING Australia, Bank Australia, Australia Post and Bendigo Bank were hit by a five-hour outage at a fourth-party processor in June, affecting cash machine links and point-of-sale connectivity.

Editing by Alex Krohn

All information included in this report and held in ORX News comes from public sources only. It does not include any information from other services run by ORX, and we have not confirmed any of the information shown with any member of ORX.

While ORX endeavours to provide accurate, complete and up-to-date information, ORX makes no representation as to the accuracy, reliability or completeness of this information.

Big banks to bear brunt of Basel III reforms in EU

By Louie Woodall | Data | 2 July 2019

Systemically important European Union banks could see their minimum capital requirements surge by almost 29% under the fully loaded Basel III rules, a study by the European Banking Authority (EBA) has concluded.

The weighted average increase to the Tier 1 minimum capital charge across a sample of 189 banks was estimated at 24.4%. The median projected uplift was far lower, however, at 10.6%, and small banks face an increase of only about 5.5%. One-quarter of the banks are projected to see their minimum requirements fall from current levels.

The Basel III reform that will have the single-largest impact on bank capital is predicted to be the output floor, which prevents modelled capital requirements falling below a proportion of the corresponding standardised requirements. This component is estimated to hike minimum requirements across the sample of banks by 9.1%, and for a subset of 104 large banks by 9.5%.

For the eight global systemically important banks (G-Sibs) in the sample, the uplift is pegged at 7.6%.
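As a rough illustration of how the floor described above binds, here is a minimal sketch. The 72.5% fraction is the fully loaded Basel III calibration; the bank figures are invented for illustration:

```python
def floored_requirement(modelled_rwa: float, standardised_rwa: float,
                        floor: float = 0.725) -> float:
    """Basel III output floor: total risk-weighted assets may not fall
    below a fixed fraction of the fully standardised calculation."""
    return max(modelled_rwa, floor * standardised_rwa)

# Hypothetical bank: internal models yield 50bn of RWA, while the
# standardised approaches would yield 80bn. The floor binds, because
# 0.725 * 80bn = 58bn exceeds the modelled 50bn.
print(f"Floored RWA: {floored_requirement(50e9, 80e9) / 1e9:.1f}bn")  # 58.0bn
```

The EBA’s estimated uplifts reflect this mechanic applied to each bank’s actual model and standardised numbers; the sketch only shows why banks with the largest model-versus-standardised gaps are hit hardest.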

Changes to the credit valuation adjustment (CVA) charge will contribute 3.9% to the overall capital uplift and 5.1% to that for the G-Sibs. 

Higher operational risk charges are estimated to contribute 3.3%, and for G-Sibs 5.5%.

In contrast, small banks will be unaffected by the output floor and experience only a small increase to their minimum requirements for CVA. Operational risk capital is expected to drop for these firms, taking 3.7% off their current requirements. The revised standardised approach is projected to increase their minimum requirements by 10.7%, however.

The total capital shortfall that banks will have to make up to be compliant with the fully loaded Basel standards is estimated at €135.1 billion ($152.6 billion), of which €82.8 billion relates to the G-Sibs and €43.8 billion to 67 other systemically important institutions (O-Siis).

The aggregate ratio of total capital to risk-weighted assets for all banks in the sample would be 14.3% today if Basel III were in force, compared with the actual current ratio of 17.3%.

What is it?

The EBA conducted a Basel III impact assessment based on data from 189 EU banks as of June 2018. These included: 104 large banks, of which eight are G-Sibs and 67 O-Siis; 61 medium-sized banks; and 24 small banks. Together, they represent 85% of the total assets of the EU banking sector. 

The EBA study uses a different set of assumptions from those used for the regular Basel III monitoring exercises.

Why it matters

The EBA’s assessment confirms findings, based on December 2017 data, that the Basel III capital wallop will land hardest on the biggest EU banks. But the effect of the output floor is now predicted to be even greater. Previously, the measure was estimated to raise minimum requirements for the G-Sibs by 5.4% on end-2017 levels. The new study concludes the uplift will be more than 200 basis points higher.

This is a blow to capital model fans as it demonstrates that the output floor will restrict the sort of capital savings possible in relation to the standardised approach. The use of models for operational risk capital has already been scrapped. Could banks, even the very largest, choose to junk their internal ratings-based models for credit risk as Basel III approaches, in the belief that they are just not worth the bother under the new regime? Only time will tell.

Get in touch


Any surprises from the EBA’s analysis? We’ll be poring over the watchdog’s findings in more detail in coming days and would welcome your feedback. Email louie.woodall@infopro-digital, or send a tweet to @LouieWoodall or @RiskQuantum. You can also get in touch via LinkedIn.

Tell me more

Basel III: EU G-Sib capital requirement to jump 25%

Revised Basel output floor to bind 41% of European banks

Basel III changes set to create big winners and losers
