Is operational risk regulation forward looking and sensitive to current risks?

By Marco Migueis | Technical paper | 15 November 2018

The disputed terrain of model risk scoring

By Steve Marlin | Features | 15 November 2018

There is no concord on how banks should police their model risk. But two Fed economists have an idea

As banks lean ever more heavily on models – for pricing, risk, capital and other vitals – their boards are demanding a clear view of exactly how much risk those models entail, and how they may be abetting or denting the bank’s financial position. But there is no clear path on how to deliver that.

“We all have different techniques,” says the head of model risk management at a US G-Sib. “Everybody is not using the same approach, but everyone has an approach.”

Into this wilderness have stepped two Federal Reserve economists, with a method that includes both numerical, or quantitative, measures and more subjective, or qualitative, ones to create an aggregate risk score for ‘families’ of models.

And their proposal – not yet a paper – has become a bit of a punching bag for other experts with their own ideas on the subject.

We all have different techniques. Everybody is not using the same approach, but everyone has an approach

Head of model risk management at a US G-Sib

“I don’t mean to trash them,” says a senior modelling expert at a large US bank. “It’s well meaning, but I’m doubtful how practical it is.”

The job of assessing models – used for things like capital planning, balance sheet management, and measuring exposure to market risk, credit risk, and operational risk like cyber defence – is a many-tentacled affair. A variety of things come into play: fallible human opinion; the degrading of models over time; new datasets and much else. Models can work together, interlocking like inverted staircases in an Escher drawing, making their assessment even trickier.

“A majority of banks are struggling with how to quantify model risk,” sums up the head of model risk at a European bank.

In the US, the Federal Reserve stipulates in SR 11-7 that model risk should be understood “not just for individual models, but also in the aggregate”.

But the Fed has remained quiet about how this should be done. So companies are feeling their way blind, gathering statistics, like the number of models being reviewed or approved, or the number that perform poorly, as crude measures of aggregate model risk.

Although some banks claim to have solved model risk aggregation, there is no agreed-upon standard, either in the industry or among regulators.

“The banks aren’t there,” says the senior modelling expert at the US bank, speaking of aggregate model risk. “The supervisory guidance talks a lot about model risk, but the industry is still trying to get there.”

Two Fed economists, one big idea

Stepping into this fray, two Fed economists, Ray Brastow of the Federal Reserve Bank of Richmond and Liming Brotcke of the Chicago Fed, have come up with a way to assign a risk score to model families. They’ve based their approach on observations gleaned during supervisory reviews – a bird’s eye view of industry-wide risk-modelling efforts. (The two underscore that the approach is their own, and does not represent Fed policy.)

They use two quantitative measures: a “model robustness index”, which measures the risk of a model at its inception – that is, the risk that the model just won’t work – and a “model stability index”, which measures model risk once it’s in operation. The indices can be added up for a numerical measure of risk within families of models.

The banks aren’t there. The supervisory guidance talks a lot about model risk, but the industry is still trying to get there

Senior modelling expert at a large US bank

By using a number to size risk, they hope to move away from opinion-based, qualitative measurements in favour of something firmer – an approach they say they have rarely seen in their supervisory forays.

Brastow says it’s clear risk cannot simply be added up across businesses to come up with a single encompassing number. “But it’s also difficult using non-numerical ways to assess risk,” he adds. “All we’re suggesting is there are ways to add discipline to the aggregation of model risk.”

To construct indices for model robustness and stability, the Fed researchers propose assigning weights to the various factors that determine model risk. The right selection of statistics and weights is crucial to building good indices, they note.

Once weights and risk factors have been selected, models can be graded numerically on robustness (at the beginning of deployment) and stability (after they’ve been put into production). Models can be further classified as low, medium or high in both robustness and stability.

For example, on a scale of zero to 100, models whose robustness or stability scores are under 60 could be classified high-risk, those between 60 and 85 as medium-risk, and those from 85 to 100, low-risk.

Within a model family, the scores could be aggregated along with other factors such as model complexity and financial impact to derive an overall aggregate model risk measure.
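In pseudocode, the mechanics might look something like the sketch below – a minimal illustration in which the factor names, weights and family-level aggregation rule are our own assumptions, not the economists’ actual specification:

```python
def index_score(factors: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of 0-100 factor scores - eg, a robustness or stability index."""
    return sum(weights[f] * factors[f] for f in weights) / sum(weights.values())

def risk_band(score: float) -> str:
    """Bands from the example above: under 60 high, 60-85 medium, 85-100 low."""
    if score < 60:
        return "high"
    return "medium" if score < 85 else "low"

# Illustrative model: the factor names and weights are assumptions
robustness = index_score(
    factors={"goodness_of_fit": 78, "out_of_sample_error": 65, "data_coverage": 90},
    weights={"goodness_of_fit": 0.5, "out_of_sample_error": 0.3, "data_coverage": 0.2},
)
print(robustness, risk_band(robustness))  # 76.5 medium

# Family-level aggregate: a simple average here; in practice it could be
# weighted by complexity or financial impact, as the text notes
family_scores = [76.5, 88.0, 54.0]
print(round(sum(family_scores) / len(family_scores), 1))  # 72.8
```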

Everyone’s a critic …

Model risk experts at banks briefly praised the Fed economists’ approach as a valiant attempt at a thorny problem. Then some of them tore into it.

“An oversimplification,” the US G-Sib executive calls it, adding that its failure to emphasise data quality is “a fundamental flaw”.

He and others note, for instance, that the example used in the Fed presentation was a model predicting the likelihood of default on home mortgages. That approach isn’t suitable for more sophisticated applications such as derivatives pricing, they say.

The choice of weights is crucial to coming up with model risk scores; some experts would have liked to see more attention paid to how these very subjective elements are selected.

“Before you can aggregate model risk, you need to look at individual model risk components,” says Peter Quell, head of portfolio analytics for market and credit risk at DZ Bank in Frankfurt. “The only problem is the weights you are assigning to the different indicators are artificial.

“There is no rational way to derive these weights. It is always up to judgement. But if you want to condense everything into one single number, somehow you need to come up with these weights.”

Another issue was data quality, or the lack thereof, in measuring model risk, especially in pricing illiquid assets. A good model blighted by bad data is no better than a flimsy model with good data.

“Data is not considered here,” says the head of model risk at the US G-Sib, of the Fed researchers’ approach. “You can have all the other parameters like statistical significance and stability, but terrible data. That isn’t captured.”

Although some banks have taken approaches similar to theirs, those attempts have fallen flat, says Brastow. When pressed about their approaches during supervisory reviews, banks say they err on the side of conservatism by adding a ‘capital buffer’, a fudge factor intended to compensate for any errors in the models. How these amounts are determined is necessarily arbitrary.

“We’re not aware of any banks that have gotten very far,” says Brastow. “When we press banks about their capital charges for model risk, they’re evasive.

“A lot of banks hold extra capital for model risk, a buffer. But if you ask them how they size that, they say: ‘We’re trying to be conservative.’ They don’t have a good way of knowing when it increases or decreases.”

Home brews

Left to their own devices, banks have been confecting their own model risk tests. The aggregation efforts that are furthest along have employed a mix of qualitative and quantitative components.

A US subsidiary of a large international bank, for instance, is developing a process for aggregating model risk that identifies the relevant parameters for individual models, and then extrapolates those to account for linkages between models.

At the individual model level, the bank’s independent validation team tracks qualitative characteristics – complexity, for instance: whether a model is a traditional one in a mature modelling area or one that uses advanced techniques, such as machine learning.

These and other metrics are tallied in the initial validation review and used to develop a ‘model risk scorecard’ for each model. Each item in the scorecard is weighted to arrive at a total score that can then be added up for all the models in a family to come up with an aggregate risk score.

The bank’s US head of model risk management says that he looks “at all the individual parameters that are highlighted in individual scores to create a total score for families of models or similar types of models.”

The bank plans to complete its model aggregation project early next year, and present the results to the business lines and senior risk managers, as well as the board risk committee. The system will inform senior management of the residual risks the bank is taking, and will help ensure compliance with SR 11-7, though this is not the main objective, he says.

“The bigger benefit is not adherence to SR 11-7,” says the head of model risk management. “It’s more about getting a better handle at articulating risk and reflecting that back into a business-as-usual assessment process.”

The scorecard concept used by the Fed researchers is also similar to one outlined in a 2015 paper by Michael Jacobs Jr, then a consultant at Accenture and currently a quantitative analytics expert at PNC Financial Services. Brastow says that of all the theoretical approaches he’s aware of, the Jacobs paper comes closest to his and Brotcke’s approach, although it doesn’t go as far in developing a quantitative method.

Jacobs suggests dividing model risk into two categories: inherent model risk and risk mitigation.

The first category, inherent model risk, is assigned a score based on the model’s complexity, uncertainty, availability of data and other factors. The second, risk mitigation, is scored based on criteria such as model validation, performance monitoring, benchmarking and backtesting.

An aggregate score is derived by multiplying, for each model in a family, its risk score by a ‘risk weight’ and totalling the results.
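In symbols – our notation, not drawn from the Jacobs paper itself – the family-level score is a weighted sum:

$$S_{\text{family}} = \sum_{i \in \text{family}} w_i \, s_i$$

where $s_i$ is the risk score of model $i$, combining its inherent-risk and mitigation scores, and $w_i$ is the risk weight assigned to it.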

The Jacobs model has advantages: it’s relatively easy to put into practice and it’s comprehensible to senior management. On the minus side, it is primarily qualitative, and hence its scores, weights and overall results are subjective. It also doesn’t capture model interdependencies.

But why aggregate at all?

I don’t believe in aggregating model risk across different model types. For example, aggregating the risks associated with pricing models, risk models and retail models might not make sense

Slava Obraztsov, Nomura

Yet others reject outright the idea of trying to peg risk across different types of models.

“I don’t believe in aggregating model risk across different model types,” states Slava Obraztsov, global head of model validation at Nomura in London. “For example, aggregating the risks associated with pricing models, risk models and retail models might not make sense.”

Nomura evaluates its models during an initial validation stage; after a model goes into operation, the bank refines these numbers based on model performance and aggregates model risk narrowly, within specific model types such as pricing or risk models.

“It is more appropriate to take large groups of models and try to aggregate model risk across the individual models in those groups,” says Obraztsov.

Obraztsov, among others, suggested the Fed researchers, in their zeal to come up with a way to aggregate risk, ignored or oversimplified model risk at the individual level.

“Model risk quantification on an individual model basis is a difficult exercise,” says Obraztsov. “Nomura has implemented well-developed methodologies, and due to the complex nature of what we are trying to achieve, the approach is not simple.”

Obraztsov says the problem with the Fed researchers’ methodology is that it spends too much time on how to aggregate model risk, without addressing its purpose. The approach comes up with an isolated measure without tying that into the bank’s overall risk appetite and capital, he says.

“Quantification of model risk is what could be a firm’s loss because of model limitations. I’m not sure they’re addressing this point,” he says.

For example, the output of a model risk aggregation could be expressed in numerical terms as an amount at risk, say $100 million. But that figure needs to be viewed in the context of the firm’s economic capital: for some firms, that might be a huge amount; for others, it could be insignificant.

“In our reporting of quantitative measures of model risk, we don’t just report potential losses in terms of specific numbers,” says Obraztsov. “We always compare those numbers with available capital, whatever threshold is determined by the board of directors. That metric is much more meaningful than actual potential loss.”

A work in progress

To be fair, the Fed researchers acknowledge the limitations of their approach. They note, for example, the subjectivity in selecting appropriate statistics and weights in constructing the indices.

They also note that a single measure of model risk across families was never the objective, and indeed is not even expected by SR 11-7. Instead, supervisors expect banks to employ multiple measures of model risk such as performance, robustness and stability.

The researchers have a number of other caveats on their approach. The number and types of models used by large institutions make it difficult to establish a uniform approach for aggregating model risk. The authors admit that they haven’t yet tried to simulate an actual bank’s use of models, and that the approach needs to be tested in real-world situations. In addition, it does not take into account the interdependence of models and networking effects on model risk.

“We’re not suggesting that this is a panacea to measure model risk,” says Brastow. “These are imperfect measures.

“However, by quantifying, you’re forced to think more deeply about where model risk comes from. If you’re adding up the number of criticised models, that’s inherently an ad hoc judgement. What we’re suggesting is this approach might create some consistency.”

We’re not suggesting that this is a panacea to measure model risk. These are imperfect measures

Ray Brastow, Federal Reserve Bank of Richmond

The two researchers have presented their idea at conferences, and plan to publish a paper early next year. Still, the fact that the approach isn’t quite ready for prime time does not negate the need to quantify model risk both at the individual and aggregate levels. Boards and risk committees need to have insight into the risks presented by models, upon whose accuracy the livelihood of the business turns.

Indeed, SR 11-7 stipulates that senior management is responsible for reporting on model risk to the board, both at the individual and aggregate levels.

Given the limitations of the Fed model, banks are not likely to settle on a quantitative measure of aggregate model risk anytime soon, nor are model risk teams likely to devote the resources needed to nail one down. Indeed, the senior modelling expert notes that independent model risk teams are having trouble retaining talent, face strained budgets and are under regulatory pressure.

And in the end, model risk management is a thankless job. Much of it comes down to saying no to models a company has sunk large amounts of money into. Against that investment, it can be hard for a person to remain impartial.

“When you’re talking about independent model validation, it’s tricky to maintain independence while getting buy-in from the rest of the organisation,” says the senior modelling expert. “People want to do the right thing, but it’s a challenge organisationally.”

As for the dreamed-of single firm-wide score of risk, it’s for another day. Coming up with a single score for all models within a ‘model family’, though, should be a high priority for banks, says Brastow.

“Aggregating model risk is difficult. We don’t think anybody does it well, but firms need to do it,” he says. On his own approach, he is circumspect. “Adding these indices has value, but we’re not suggesting this is the only way to do it.”

EU G-Sibs cut $14 billion in op risk

By Alessandro Aimone | Data | 12 November 2018

The eight European global systemically important banks (G-Sibs) shed an aggregate $14.3 billion of operational risk-weighted assets in the third quarter of the year.

Banco Santander posted the largest decline – at 7% – with operational RWAs falling to $70 billion from $75 billion in the second quarter. Nordea and Societe Generale reported the smallest decline – at 0.5% – with op RWAs down $0.1 billion and $0.3 billion, respectively.

While none of the banks sampled posted higher op RWAs over the period, HSBC and Standard Chartered were the only two banks to remain flat on the quarter, at $93 billion and $28 billion, respectively.

The reductions are more dramatic on a whole year view. In the 12 months to end-September, the G-Sibs have cut $35 billion of op RWAs (6%). UniCredit has made the most progress reducing its exposure over the year, with a cut of 19%, or $8 billion. Deutsche Bank has slashed the most in dollar terms, with a reduction of $12 billion (10%).

BNP Paribas and Santander reported higher op RWAs than a year ago – up 11% and 0.5%, respectively.

What is it?

RWAs are used to determine the minimum amount of regulatory capital that must be held by banks. This minimum is based on a risk assessment for each type of bank asset. The riskier the asset, the higher the RWA, and the greater the amount of regulatory capital required.
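As a stylised illustration, using the 8% Basel baseline minimum (before buffers):

$$\text{minimum capital} = 8\% \times \text{RWA} = 8\% \times \text{risk weight} \times \text{exposure}$$

A $100 million exposure risk-weighted at 100% therefore requires at least $8 million of capital; the same exposure at a 20% risk weight requires only $1.6 million.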

Existing Basel Committee rules allow op RWAs to be calculated under the advanced measurement approach (AMA) using banks’ own internal models. A standardised approach and basic indicator approach are available for those businesses a bank is unable to cover with the AMA.

At end-2017, the committee scrapped the AMA and replaced it with a standardised measurement approach, under which firms will have to calculate their operational risk using the standard-setter’s own formulae. The SMA will be phased in from January 2022.   

Why it matters

As with their US peers, the downward trend across European G-Sibs seems clear, especially when looking at the 12-month horizon.

There could be multiple drivers for this. Fewer large op risk losses in recent years would almost certainly have had an impact on banks’ model outputs, since these largely depend on banks’ own historical losses, as well as those of their peers.

Updates to internal models could also have played a role, with changes to the parameters of the stress scenario-driven components of the AMA sure to have an effect.

However, a recent Risk Quantum analysis showed the share of European banks’ op risk calculated using internal models is shrinking, with BNP Paribas and Barclays setting their requirements to the level of the standardised approach this year. It is worth pointing out that the former saw an increase in its op RWAs, while the latter saw a decline.

We can’t quite explain this divergence, but we will track how banks’ op risk fluctuates in the coming months, especially as we expect more firms to migrate to the standardised approach in anticipation of the AMA’s retirement.

Get in touch

What are your thoughts on the state of operational risk across large European banks? Let us know by sending an email at alessandro.aimone@risk.net, or send a tweet to @aimoneale or @RiskQuantum.

Tell me more

Goldman, Wells cut operational risk

European banks junk op risk modelling

Top European banks shed $32 billion in op risk

View all bank stories

Goldman, Wells cut operational risk

By Louie Woodall | Data | 9 November 2018

The largest US banks shed $11 billion of operational risk-weighted assets (RWAs) in the third quarter, with Goldman Sachs and Wells Fargo trimming the most.

Total op RWAs across the eight US global systemically important banks (G-Sibs) hit $1.86 trillion at end-September. Wells Fargo reduced its amount by $10 billion (3%) to $319 billion, and Goldman Sachs by $5 billion (5%) to $108 billion from end-June.

Morgan Stanley, State Street and BNY Mellon all made smaller op RWA savings of $849 million, $151 million and $238 million, respectively. 

In contrast, JP Morgan and Citigroup increased their op RWAs in the third quarter by $3.9 billion and $1.1 billion, respectively. Bank of America’s total was flat at $500 billion. 

Year-to-date, Goldman Sachs has reduced op RWAs the most among the G-Sibs, by $9 billion (8%), followed by JP Morgan, with a cut of $8.6 billion (2%).

What is it?

US banks use the advanced measurement approach (AMA) to quantify their op RWAs and associated capital charges. 

This approach uses the frequency and severity of past op risk losses to determine how much capital should be put aside to absorb potential future losses. 

Each bank’s exposure is modelled using scenarios incorporating several different types of operational failure, as well as internal and external actual loss experience. Updates to the loss experience inputs can cause the resulting op RWA amounts to vary dramatically. For example, if a large regulatory fine is incurred during one quarter, it may result in higher reported op RWAs at the end of that reporting period.
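A stylised simulation of this frequency-severity logic is sketched below. It assumes Poisson loss counts and lognormal severities with purely illustrative parameters – it is not any bank’s actual AMA model – and reads off the 99.9th percentile of simulated annual losses, the confidence level used for regulatory op risk capital.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def annual_losses(n_years: int = 100_000, freq: float = 25.0,
                  sev_mu: float = 11.0, sev_sigma: float = 2.0) -> np.ndarray:
    """Simulate total op risk loss per year: Poisson frequency, lognormal severity."""
    counts = rng.poisson(freq, size=n_years)
    return np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in counts])

losses = annual_losses()
capital = np.quantile(losses, 0.999)  # capital proxy: 99.9% annual loss quantile
print(f"99.9% annual loss quantile: {capital:,.0f}")
```

Updating the inputs – a new large internal loss, say – shifts the simulated distribution and hence the quantile, which is why reported op RWAs can jump from one quarter to the next.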

Why it matters

Following years of ever-growing op risk capital requirements for US banks, those requirements appear to be starting, albeit slowly, to reverse direction.

Why? The reason is simple: banks are incurring fewer, and less dramatic, op risk losses. Morgan Stanley’s most recent 10-Q filing states its decline in op RWAs “reflects a continued reduction in the frequency and magnitude of internal losses utilised in the operational risk capital model related to litigation and execution and processing”.

Similar effects may be nudging the totals of Goldman and JP Morgan lower too, although quarterly volatility in all banks' op RWAs will persist as updates to the loss data are fed into their models. 

Yet not all banks are on the same trajectory. Yes, Wells Fargo trimmed its op RWAs in the third quarter, but compared to end-2017, its total is up almost $20 billion in response to conduct and oversight failures.

Get in touch

Is the only way down for US op RWAs? Lend us your thoughts in an email to louie.woodall@infopro-digital.com or by tweeting @LouieWoodall or @RiskQuantum.

Tell me more

Has op risk capital peaked for US banks?

Share of op risk RWAs at US banks falls

JP Morgan cuts op risk RWAs by $12.5 billion

View all bank stories

Op risk data: conduct risk losses top $600 billion since 2010

By Risk staff | Opinion | 7 November 2018

Swift hack targets State Bank of Mauritius; Capital One and Mashreq hit by AML fines. Data by ORX News

The largest operational loss in October was a $205 million settlement reached between Aegon subsidiary Transamerica and a class of US life insurance policyholders. The insurer allegedly increased the monthly deduction rate adjustment on its universal life insurance policies between 2015 and 2016 to recoup past losses. The lawsuit began in early 2016.

According to the court, increasing the monthly deduction rate to recoup past losses constituted breach of contract. Transamerica had notified policyholders in 2015 that it would increase charges by up to 38%. The insurer agreed to pay $195 million into a fund, which will benefit past and current policyholders, and $10 million towards the claimants’ attorneys’ fees.

In second place, the Office of the Comptroller of the Currency fined Capital One $100 million after finding the bank’s anti-money laundering programme under the Bank Secrecy Act to be deficient due to an inadequate system of internal controls and ineffective testing. Consequently, Capital One had failed to conduct customer due diligence or identify and report suspicious activity.

The fine followed a July 2015 consent order requiring Capital One to improve its AML risk assessment, policies and procedures, and audit programme, with which Capital One failed to comply. This is the largest of four AML fines issued by the OCC to US banks this year.

The third-largest loss also involved AML failures. The New York Department of Financial Services fined Dubai-based Mashreqbank $40 million for AML violations in its US dollar clearing operations, through which it provides services to clients in South-east Asia, the Middle East and Northern Africa. According to the NYDFS, the bank had insufficient procedures and resources for investigating suspicious activity alerts. Additionally, Mashreqbank used a third-party vendor to validate its transaction monitoring rules, but the vendor’s work was found to be deficient.

As with Capital One, the fine followed a joint review by the NYDFS and the Federal Reserve Bank of New York, conducted in December 2017, which found that the bank had failed to address shortcomings identified by regulators in June 2016.

Fourth, Navy Federal Credit Union agreed to pay $24.5 million to settle class action claims it unfairly charged overdraft fees in relation to its Optional Overdraft Protection Service. According to the lawsuit, the wording in Navy Federal’s terms meant the union would only charge an overdraft fee when an account held insufficient funds to cover a transaction. However, between 2012 and 2017 union members were charged a fee on transactions that were authorised against a positive available balance.

Finally, the Ukrainian banking regulator announced in October that Avant-Bank employees had improperly issued loans totalling 602 million hryvnia ($21.1 million) to companies affiliated with the bank, shortly before it was declared bankrupt. The regulator alleges that the employees breached basic lending principles and failed to ensure the creditworthiness of the companies. The companies reportedly declared themselves bankrupt after receiving the funds, and representatives of the bank attempted to cover up the scheme by destroying evidence of the loans.

Story spotlight: State Bank of Mauritius in $4m cyber loss

On October 2, hackers targeted the Swift payment systems of State Bank of Mauritius’s Indian operations, resulting in a potential loss of up to $4 million. Initially, the bank reported that $14 million had been fraudulently transferred, but reduced this figure following recovery efforts. The bank said customer accounts were not targeted.

In response to the attack, SBM initiated a cyber security review of its Indian operations and informed relevant authorities. According to ORX News data, banks in India have suffered total losses of $41.6 million and one data breach this year as a result of cyber attacks.

In focus: Conduct risk

Conduct risk is a long-running concern for operational risk managers. Since the financial crisis, authorities worldwide have ramped up their focus on misconduct, punishing financial institutions severely for any shortcomings. Ensuring fairness in how firms treat customers and deal with the market is as much a preoccupation of regulators now as corporate governance.

Despite its importance, there is a lack of industry best practice on measuring and managing conduct risk. Not all regulators provide a definition, and those that do may differ from others. This can pose problems for financial firms that want to establish or improve their conduct risk management frameworks.

One aspect of a good risk management framework is the ability to identify past events. As a simple proxy for misconduct-related events, ORX has combined three existing loss event categories: “clients, products and business practices”, “internal fraud” and “employment diversity and discrimination”. This is not intended to be a definition of conduct risk, but is only a guide to use within existing data.
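Applied to a loss dataset, the proxy amounts to a simple filter. A minimal sketch follows; the column names are hypothetical.

```python
import pandas as pd

# The ORX event-type grouping used above as a proxy for conduct risk
CONDUCT_CATEGORIES = {
    "clients, products and business practices",
    "internal fraud",
    "employment diversity and discrimination",
}

def conduct_losses_by_year(losses: pd.DataFrame) -> pd.Series:
    """Total losses in the conduct proxy categories, by year.

    Assumes columns 'event_type', 'year' and 'loss_amount' (names hypothetical).
    """
    mask = losses["event_type"].str.lower().isin(CONDUCT_CATEGORIES)
    return losses.loc[mask].groupby("year")["loss_amount"].sum()
```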

Applying this event type combination to loss data shows an interesting picture. Since 2010, global fines and settlements related to misconduct have reached almost $607 billion. The years with the highest losses are 2011 and 2012, when enormous fines for mortgage-backed securities, PPI, and benchmark manipulation began to surface*. Since then, conduct-related losses have seen a general decline.

So far in 2018, however, conduct-related losses have bucked that trend slightly. As of October, ORX News has recorded over $10 billion more in conduct-related losses this year than last.

It’s impossible to know whether this uptick signals a return to higher conduct risk losses. What is clear, however, is that financial regulators worldwide continue to scrutinise conduct and culture. The costs of this focus are not only financial; reputations can be severely damaged by high-profile conduct risk events too.

One example of this focus is the Australian Royal Commission probe into the country’s financial sector. Throughout 2018, Australian banks and insurers have attended hearings that exposed numerous conduct issues, including the charging of fees to dead superannuation customers. In addition to the severe reputational damage, banks have set aside hundreds of millions to cover compensation payments. By 2020, the four largest Australian banks – ANZ, CBA, Macquarie and NAB – will have paid an estimated A$1.3 billion ($0.95 billion) in fines and remediation, as well as A$1.1 billion for new compliance programmes, according to Morgan Stanley.

While a review of the misconduct of a country’s entire financial industry is uncommon, the outcomes of the Royal Commission so far serve as a reminder that misconduct is costly, in more ways than one, and is not going away any time soon.

*Due to the way ORX News records legacy events, fines or settlements relating to the same event are grouped into one loss. Fines and settlements for crisis-era misconduct are still taking place, but are recorded against the year in which a loss for that event was first recorded.

Editing by Alex Krohn

All information included in this report and held in ORX News comes from public sources only. It does not include any information from other services run by ORX and we have not confirmed any of the information shown with any member of ORX.

While ORX endeavours to provide accurate, complete and up-to-date information, ORX makes no representation as to the accuracy, reliability or completeness of this information.

Predictive fraud analytics: B-tests

By Sergey Afanasiev, Anastasiya Smirnova | Technical paper | 15 October 2018

Wells Fargo cuts $24 billion of RWAs

By Louie Woodall | Data | 12 October 2018

Wells Fargo crushed risk-weighted assets in the third quarter, shaving billions off its end-June total as the bank continues to navigate the asset cap imposed by the Federal Reserve. 

End-September RWAs were $1.25 trillion, down 2% from $1.28 trillion the previous quarter. The reduction in RWAs outpaced the cut in total average assets, which fell just $8.6 billion (0.5%) to $1.88 trillion. 

The RWA decrease largely offset the dampening effect on the bank’s core solvency ratio caused by a drop in its common equity Tier 1 (CET1) capital, which was depleted following the return of $14.5 billion to shareholders over the past three months through dividends and stock buybacks. 

CET1 capital stood at $149 billion at end-September, down 3% from $153 billion the prior quarter. The CET1 capital ratio, however, declined just 10 basis points in the third quarter, to 11.9% from 12% at end-June.
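A back-of-envelope check with the reported figures shows why the ratio barely moved – RWAs fell almost as fast as capital:

$$\text{CET1 ratio} = \frac{\text{CET1 capital}}{\text{RWAs}}: \quad \frac{\$153\text{bn}}{\$1{,}280\text{bn}} \approx 12.0\% \;\rightarrow\; \frac{\$149\text{bn}}{\$1{,}250\text{bn}} \approx 11.9\%$$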

Since end-2017, Wells Fargo has pared RWAs and total assets by 3%, or $33 billion and $59 billion, respectively.  

Wells Fargo is barred from growing above $1.95 trillion in assets by a Federal Reserve order issued in response to the lender’s “ghost account” scandal – in which bank employees opened hundreds of thousands of deposit and credit card accounts without customers’ permission. 

What is it?

RWAs determine the minimum amount of regulatory capital that a bank must hold, based on a risk assessment for each type of bank asset. 

The riskier the asset, the higher its RWA value and the more capital needs to be held against it.  

RWAs can be calculated using a regulator-set standardised approach or a bank’s own internal models. 

Why it matters

The drop in RWA allowed Wells Fargo to reward shareholders without denting its capital ratio. Management called this "optimisation".  

It could be that the bank shed assets with high risk-weights, or that the credit quality of its loan portfolio improved, affecting the inputs used to determine their RWA value. Then there’s the make-up of its trading portfolio, which could have run lower risk in the third quarter relative to the second. 

Forensic details on where the savings were made will be found in the bank’s quarterly regulatory filings and Pillar III disclosures, which will be published later this month. All that can be said for now is that Wells Fargo is continuing to manage to its asset cap while maintaining its pre-2018 balance of risk and capital.

Get in touch

We're interested in learning more about Wells Fargo's RWA optimisation efforts. If you have the inside story and want to share, get in touch at louie.woodall@infopro-digital.com or tweet @LouieWoodall or @RiskQuantum.

Tell me more

Wells Fargo sheds low risk assets

Wells Fargo cuts deposits to meet Fed order

View all bank stories

How to upgrade your first line of defence

By Alex Hurrell | Advertisement | 11 October 2018

The panel

  • Christophe Delaure, Senior product manager, IBM
  • David Canter-McMillan, Vice-president, Function head of operational risk, Federal Reserve Bank of New York
  • Kevin Krueger, Vice-president, Markets, Federal Reserve Bank of New York
  • Anna Hardwick, Chief control officer, Global operations, HSBC
  • Moderator: Alec Campbell, Divisional content editor, Risk.net

Front-office control functions may still be in their infancy, but the importance of the front office in the broader control framework was recently reinforced in guidance from the Federal Reserve Board and the UK Financial Conduct Authority.

Defining the first line of defence and its responsibilities remains a challenge even within institutions employing the three lines of defence (3LoD) model. For newly created control functions in the first line of defence, there remains a focus on conduct, authorisation and demonstrating end-to-end control of sales, trading, investment banking and client-facing activities.

There is a general sense in the industry that, given improvements in technology, greater automation and the size of the largest global banks, some of these changes should be sped up.

Key topics discussed in this webinar include:

Op risk data: SEC issues first fine under cyber risk rule

By Risk staff | Opinion | 10 October 2018

SocGen provisions for sanctions violations; has the SMR prompted more bank CEO resignations? Data by ORX News

In the largest publicly declared operational risk loss from September, Societe Generale provisioned €1.1 billion ($1.28 billion) to cover penalties it expects to receive from the US authorities over sanctions violations. SocGen is being investigated for alleged breaches involving Iran, Cuba and Sudan in 2014.

The investigation involved the Department of Justice and the Treasury Department, as well as federal and New York state attorneys, the Federal Reserve and the New York Department of Financial Services. On September 3, SocGen said it had entered a more active phase of discussions with US authorities and expected to reach a resolution in September 2018 – although no resolution has yet been publicly reported.

In second place, ING paid €775 million to settle allegations it violated anti-money laundering regulations. This settlement is the second-largest AML loss recorded in the ORX News database, excluding sanctions losses. The Dutch public prosecutor found that ING had insufficiencies in its internal policies and had participated in culpable money laundering. Specifically, between 2010 and 2016, ING allegedly failed to prevent the laundering of hundreds of millions of euros due to shortcomings in its client due diligence policy.

According to the prosecutor, for a number of years ING lacked focus and awareness of its client due diligence obligations. It also said ING had prioritised commercial objectives over compliance, failed to implement long-term improvements, had dysfunctional and fragmented controls and a deficient escalation culture.

The third-largest loss was a settlement of $250 million paid by insurer State Farm to settle allegations it had rigged the election of an Illinois high court justice to overturn a $1 billion judgment against the firm.

State Farm was ordered to pay out $1.19 billion in 1999, after a class of customers claimed the insurer had replaced their crashed car parts with generic rather than branded parts. The amount was reduced on appeal to $1 billion, and in 2005 was thrown out completely after the election of Lloyd Karmeier to the court.

The class claimed that State Farm had paid $3.5 million to Karmeier’s election campaign because of his sympathy for tort reform. The class sought $1 billion in damages and $1.8 billion in interest, which could have been tripled under the Racketeer Influenced and Corrupt Organisations Act if successfully prosecuted. State Farm did not admit to any liability or wrongdoing as part of the $250 million settlement, which has a final approval hearing scheduled for December.

In the fourth-largest loss, Punjab National Bank was allegedly defrauded of 5.39 billion rupees ($74.2 million) in loans by a telecommunications and power equipment manufacturer between 2013 and 2014. The alleged loss appears to be far from an isolated case: earlier this year, the Indian bank was the subject of intense media attention after it revealed in May a massive $2.23 billion letters-of-undertaking fraud by diamond businessman Nirav Modi. In September of last year, it was one of a number of banks caught up in the 50 billion rupee loan fraud allegedly perpetrated by Kingfisher Airlines founder Vijay Mallya – the seventh-largest publicly declared op risk loss of 2017.

Lastly, hackers stole $59.6 million worth of cryptocurrency from Japanese cryptocurrency exchange operator Tech Bureau in just two hours on September 14 after breaching a hot wallet – cryptocurrency storage that is connected to the internet. Tech Bureau plans to refund all affected customers.

Story spotlight: Voya Financial pays $1 million under SEC cyber rule

On September 26, fund manager Voya Financial agreed to pay $1 million to the Securities and Exchange Commission after hackers impersonated three Voya Financial independent contractors and gained access to the personal information of 5,600 customers. It was the regulator’s first enforcement of its 2013 identity theft red flags rule, which requires firms to have written procedures in place that could highlight attempted identity thefts.

According to the SEC, the hackers phoned Voya’s technical support line in April 2016 and pretended to be the independent contractors requesting a password reset. Although Voya told staff after the first attempt not to provide usernames or password resets over the phone, the hackers successfully impersonated contractors twice more.

The hackers could then access customer information including addresses, dates of birth and last four digits of social security numbers. Voya neither admitted nor denied the SEC’s findings.

In focus: is the SMR behind bank CEO departures?

September saw a spate of high-profile resignations following major operational risk events. TSB’s Paul Pester stepped down in response to IT failures, while Danske Bank’s Thomas Borgen and ING chief financial officer Koos Timmerman resigned over anti-money laundering failures.

Those aren’t the only cases in 2018. Earlier this year, Australia saw two high-profile resignations: Commonwealth Bank of Australia chief Ian Narev stepped down following the bank’s AML crisis, and Craig Meller of fund manager AMP resigned in the wake of revelations from the Royal Commission into conduct in the financial industry.

On the face of it, it would be easy to conclude from this that banking executives are increasingly resigning after major operational risk events; ORX News examines the data to determine if this is really the case.

Accountability does not just rest with the CEO, says the FCA

There has certainly been a shift in the conversation around accountability. The UK Financial Conduct Authority’s Senior Managers Regime, introduced in 2016, formalised the concept that although a senior manager may delegate tasks, they cannot delegate the responsibility for the outcome. Since then, Australia, Hong Kong and Singapore have adopted or started to adopt similar schemes. The Irish central bank has called for more accountability for senior managers, and in the US the Department of Justice has increased its focus on pursuing executives, while the Federal Reserve is updating its risk rating scheme for banks.

All of this comes in the context of increased regulatory scrutiny of conduct issues, a growing focus on culture, and continued public distrust of banks – much of it a legacy of the financial crisis that sees the public still questioning how bankers “got away with it”.

On examining the data, however, there is no clear trend of increasing resignations.

The first thing to clarify is that it is rare for a CEO to step down as a result of breaches that happened outside their tenure, even though they may often have been in senior positions during this time. Of those chief executives who departed this year, all had held their role for at least two years of the period in which the wrongdoing was happening. In fact, that trend holds true for all departures in the last six years.

With those taken out of the equation, the picture is mixed. For example, Barclays’ Bob Diamond resigned over Libor allegations in 2012, and so did Rabobank’s Piet Moerland in 2013. But other CEOs have not.

This raises the question: should a CEO resign if a major event happens on their watch? Each individual case will be unique. Ultimately, it is up to a bank’s board whether a CEO stays or goes. If their competency and ability to run the firm outweigh any financial or reputational damage, there may be no clear business reason to resign. But if investor, media or regulator reactions are sufficiently negative, there may be no choice.

One part of the SMR and similar schemes is that accountability does not just rest with a bank’s head employee. A shift in the culture of accountability should devolve personal responsibility through the ranks of senior and middle managers, preventing situations that create the need for a high-profile resignation.

All information included in this report and held in ORX News comes from public sources only. It does not include any information from other services run by ORX and we have not confirmed any of the information shown with any member of ORX.

While ORX endeavours to provide accurate, complete and up-to-date information, ORX makes no representation as to the accuracy, reliability or completeness of this information.

Output floor to constrain almost half of G-Sibs – Basel study

By Alessandro Aimone | Data | 8 October 2018

Output floors on modelled capital requirements will become the binding regulatory constraint for almost one in two global systemically important banks (G-Sibs) once the Basel III rules are fully implemented, up from around just one in four today, a study by the Basel Committee shows. 

The committee estimates that the Basel III output floor – which forces internal model banks to hold minimum capital equivalent to 72.5% of the amount generated by the revised Basel III standardised approach by 2027 – will impose the single largest Tier 1 capital requirement on 46% of G-Sibs. The current Basel II transitional floor, which is set at 80% of standardised risk weights, is the biggest capital burden for just 27% of G-Sibs. 
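In stylised terms – a sketch of the floor mechanics only, ignoring transitional arrangements and the finer detail of the Basel text – the constraint works as follows:

```python
def floored_rwa(modelled_rwa: float, standardised_rwa: float,
                floor: float = 0.725) -> float:
    """Basel III output floor, stylised: the RWAs used to set capital cannot
    fall below `floor` times the standardised-approach figure."""
    return max(modelled_rwa, floor * standardised_rwa)

# The floor binds when modelled RWAs fall below 72.5% of standardised RWAs
print(floored_rwa(60.0, 100.0))  # 72.5 - the floor is the binding constraint
print(floored_rwa(80.0, 100.0))  # 80.0 - the internal model result stands
```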

Of all ‘Group 1’ banks – internationally active banks with more than €3 billion in Tier 1 capital – the Basel III output floor will constrain 30% of firms, compared with just 13% limited by the Basel I floor today. Of smaller, ‘Group 2’ banks – those with less than €3 billion in Tier 1 capital – that use internal model approaches, 13% will have to capitalise to the level of the output floor, compared with 4% today.

The capital adding effects of the output floor vary across regions, however. Group 1 banks in Europe are expected to swallow a 6% increase on current Tier 1 capital requirements because of the output floor, while those in the Americas may see a 2.2% decrease. Those in the rest of the world are also anticipated to see a decrease, of 1%. 

The output floor will be phased in gradually from a 50% level in 2022 to the full 72.5% level in 2027. 

A European Banking Authority (EBA) study goes into greater detail on the output floor capital increases to be faced by European Union firms. When the full 72.5% floor requirement is applied in 2027, the cumulative Tier 1 capital increase on current levels will be 6.3% for all banks, 5.4% for G-Sibs and 5.3% for Group 2 lenders. 

Notably, smaller banks will incur larger increases in minimum capital requirements earlier on in the phase-in period than their larger peers. Group 2 banks will experience a 1.4% capital increase in 2022, compared to a 0.3% increase for Group 1 lenders.   

But from 2024 onwards, the floor will add an increasing capital burden to bigger banks. The floor will add 1% on top of current G-Sibs’ capital requirements in 2024, 2.3% in 2025, 3.8% in 2026, and 5.4% in 2027.

What is it?

The Basel Committee published an updated monitoring report on October 4 to gauge the likely effect of the Basel III reform package on banks’ regulatory capital and liquidity requirements. 

The EBA published a concurrent report with more granular data on how the new rules will specifically affect EU firms.   

Why it matters

That more firms will be constrained by a floor based on Basel’s standardised approach in 2027 than today may please regulators, many of which lobbied for the committee to include the measure to prevent banks from using internal models to lower their capital requirements. But it could store up trouble for the future.

If the largest banks start capitalising according to the standardised approach rather than the modelled approach because of the floor, a herd mentality could take hold with firms all allocating their capital in the same way, guided by the best returns achievable under Basel's one-size-fits-all methodology. 

If such crowding behaviours spread widely, then during a market reversal these banks could all react in the same way, by unwinding the same positions at the same time, transforming a downturn into a crisis. 

The regional variances in output floor effects also merit closer study. In particular, US regulators will want to deep-dive into the data to determine how the revised Basel floor will affect US banks. 

Today, US firms are required to hold enough capital to meet 100% of market and credit requirements as determined by the standardised approach, an obligation known as the ‘Collins floor’. Unlike the Basel III floor, the Collins floor excludes operational risk from the calculation.  

As Risk.net previously reported, this means that the larger the share of modelled operational risk capital as a share of a US bank’s total, the lower the effective Collins floor on modelled credit and market capital. 

According to this latest Basel study, it looks as though the different make-up of the Basel III output floor compared with the Collins floor means the former will have a muted effect on US banks. Whether this will spark a re-think of the Collins floor to better align capital requirements globally remains to be seen. 

Get in touch

Let us know your thoughts by dropping us a line at alessandro.aimone@risk.net, or send a tweet to @aimoneale or @RiskQuantum.

Tell me more

Revised Basel output floor could hit US banks after all 

Basel III changes set to create big winners and losers

Basel III: EU G-Sib capital requirement to jump 25%

View all regulator stories