UK regulator issues plans for bank ops resilience

By Steve Marlin | News | 6 December 2019

Bank of England to publish formal policy for recovering from disasters in 2020

The Bank of England (BoE) has released new proposals requiring financial firms to measure the impact of cyber attacks and other disruptions, and to devise plans for maintaining crucial business services for customers.

The draft rules lay down standards for operational resilience, including a requirement to set time limits on a return to operations following an outage or failure.

Operational resilience reflects a shift in emphasis towards maintaining key services for customers once disaster strikes, and away from systems-led continuity planning, which involves creating a series of responses to specific events.

“[The rules] set impact tolerance at the service disruption level, as opposed to traditional business continuity, such as ‘we don’t want our servers to be out for more than two hours’. Instead of putting the focus on the system, they are putting the focus on service delivery,” says Evan Sekeris, a partner in the financial services practice at consultancy Oliver Wyman.

The long-awaited rules take the form of a policy statement and a series of consultation papers released jointly by the UK’s three supervisory authorities: the BoE, Prudential Regulation Authority (PRA) and Financial Conduct Authority (FCA).

The policy document notes that impact tolerance requires firms to assume a disruptive event has already happened and gauge its impact, in contrast to risk appetite, which focuses on the likelihood of an event occurring and its financial cost.

The draft rules establish three pillars for operational resilience: identifying important business services, setting impact tolerances, and ensuring firms are able to remain within those limits. Underlying the three pillars is a list of supporting requirements such as testing and operational risk management. Subject to feedback, the PRA plans to finalise its op resilience policy during the second half of 2020.

Under the proposals, impact tolerance would be expressed as a metric, such as maximum tolerable duration for a return to service, or number of transactions or customers affected. Companies would conduct self-assessments of their operational resilience, and communicate the results to the regulator.
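
For illustration only, here is a minimal sketch of how a firm might encode such a tolerance and test a disruption against it; the service name, thresholds and field names are hypothetical rather than drawn from the draft rules.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class ImpactTolerance:
    # Hypothetical thresholds for one important business service
    service: str
    max_outage: timedelta          # maximum tolerable duration of disruption
    max_customers_affected: int    # maximum tolerable number of customers affected

    def is_breached(self, outage: timedelta, customers_affected: int) -> bool:
        """Return True if a disruption exceeds either tolerance metric."""
        return outage > self.max_outage or customers_affected > self.max_customers_affected

# Example self-assessment input (illustrative numbers only)
tolerance = ImpactTolerance("retail payments", timedelta(hours=2), 50_000)
print(tolerance.is_breached(timedelta(hours=3), 12_000))  # True: duration limit exceeded
```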

The document notes that the BoE is considering setting specific impact tolerances for vital services that could affect the broader economy, such as payment systems, and that the PRA will consult on these tolerances once they’re finalised.

The PRA is not currently planning to set scenarios that companies would need to follow in order to test their ability to remain within impact tolerances, but said it could do so at some future point “if it considers it necessary”.

By creating a set of metrics, the regulator is in danger of undermining its stated objective of making companies more attuned to extreme events, warns Sekeris. Firms would end up managing to the metric, rather than to the broader issues of maintaining services.

“With metrics, there is a risk of going to a traditional business continuity mindset, where you try to figure out all the possible bad outcomes and have a plan for each one, instead of developing the muscle memory to operate in unforeseen circumstances,” says Sekeris.

Time lapse

The PRA had previously signalled that it would issue the paper by the end of the third quarter of this year. Some have speculated that the delay in its release may be partially due to differences over how prescriptive the policy should be, with some arguing that the regulator should give firms latitude to develop their own standards, and others arguing for a more active role by the regulator.

However, people familiar with the matter suggest the hold-up owes more to the time required to co-ordinate the joint efforts of the BoE’s policy development team, the PRA and FCA.

A spokesperson for the BoE declined to comment.

An operational risk manager at a UK bank says the industry has been pushing the BoE to work in concert with other regulators when writing its rules on areas such as impact tolerance and targeted return to operation times: “We’ve been lobbying very hard behind the scenes. As there’s been some take-up of this concept of resilience among other global regulators – the US being one – we’ve asked that the PRA work to engage them more formally. With the Basel structure there, it was the most logical way.”

For international banks active across borders, it is particularly important for regulators to devise harmonised rules so as to avoid the problem of clashing requirements, he points out.

“We don’t want to create a discordant regulatory environment, where it becomes impossible for us to operate across multiple jurisdictions because they’re all setting regulations that in some instances conflict with each other,” says the manager.

But he adds: “We’re definitely going to see [different] regulatory flavours.”

US regulators had considered mandating a two-hour return to operations following a cyber attack, but the proposal was dropped following industry criticism that it was unrealistic.

Noting that the Basel Committee on Banking Supervision is working to develop a set of metrics for operational resilience, Arthur Lindo, chair of the committee’s operational resilience working group, said at the Op Risk North America event in June that US and UK regulators are taking different approaches on some aspects of operational resilience policy.

For example, while UK authorities require mandatory stress-testing of one-off tail events, US regulators do not, said Lindo, who is also a deputy director in the US Federal Reserve’s supervision and regulation division.

Additional reporting by Tom Osborn; editing by Alex Krohn

NY Fed’s Stiroh: ‘More work to be done’ on bank culture

By William Towning | News | 5 December 2019

Supervision chief warns on new risks from machine learning bias

In 2014, the Federal Reserve Bank of New York launched its first conference on reforming culture in the finance industry. The event, which later evolved into a series of conferences, workshops and training programmes, sought to tackle shortcomings in bankers’ behaviour, which had been dramatically exposed during the 2008 financial crisis and subsequent Libor scandals.

“The consequences of inaction seem obvious,” then-New York Fed president William Dudley said in a 2014 speech.

These consequences included the possibility that banks might be broken up if they refused to change. “They are both fully appropriate and unattractive – compared to the alternative of improving the culture at the large financial firms and the behaviour that stems from it,” Dudley said. “So let’s get on with it.”

Five years on, Risk.net’s sister publication Central Banking spoke with Kevin Stiroh, head of supervision at the New York Fed, to hear about what improvements have been made. “We’ve seen progress, but we expect to see more,” he says.

Since 2014, advances in technology have started to affect both firms’ and supervisors’ risk management operations. As technology becomes cheaper and computing power gets stronger, the New York Fed expects it to both improve banking culture and create new challenges for supervisors.

“Banks can use technology in ways to augment their own internal risk management, whether it’s looking at transaction flows from an anti-money laundering perspective, or whether it’s real-time monitoring of their own employees, their communication channels and their trading patterns,” says Stiroh.

Technology will also enable supervisors to create better means of analysing banking culture than in the past, he says. For example, supervisors can examine transaction flows for anti-money laundering purposes or use natural language processing, a form of machine learning, to understand risk reports with greater efficiency.

Stiroh notes, however, that it is still “too early” to know the true impact of technological change on culture, as new risks will likely emerge: “If you build machine learning algorithms to make credit decisions that are so complex that you don’t really understand the underlying decision-making process, how will you be sure that there are not embedded biases in those algorithms just like there might be embedded biases in each individual?”

The impact of technology was highlighted as the second “greatest challenge” for 2020 by the 900 senior compliance practitioners who took part in a recent Reuters survey.

“We see lots of ideas, pilots and projects under way to do, for example, network analysis within firms trying to understand how information flows [and] how standards of behaviours are set,” Stiroh says.

Uneven progress

While technology may help accelerate change in the coming years, Stiroh says it is “fair to say progress has been uneven and that there’s more work to be done”. The variation of progress across the industry is “very much” driven by the approach taken by individual chief executives, he says.

The tone they set for the organisation “trickles down” across the business departments. “We see some very specific examples that we think are consistent with building or investing in a firm’s cultural capital,” he says, highlighting a greater appreciation of risk culture by senior managers, the creation of ethics sub-committees and more robust employee evaluations as key steps.

“Cultural capital” is a principle the New York Fed uses to define a bank’s behaviours, mindsets and the norms that determine how people act. “Just like equity capital makes the firm more resilient to credit losses, cultural capital makes the firm more resilient to potential misconduct losses or misconduct events,” explains Stiroh.

Setting the tone

One aspect of the poor banking culture preceding the 2008 crisis was often defined as banks putting their own profits ahead of their customers’ and clients’ interests. Recent research conducted by political economist Huw Macartney finds that from 2011 to 2016 there was a sharp increase in the number of mentions of customers, values and ethics – from just over 50 to roughly 325 on average – in large US and UK banks’ annual reports.

Macartney’s evidence suggests a shift in the strategies of Bank of America, Barclays, Citigroup, Lloyds, Goldman Sachs, JP Morgan, Royal Bank of Scotland and Wells Fargo towards a greater emphasis on ethical banking culture in their communications. Furthermore, earlier this year, many of these banks’ CEOs, among nearly 200 other chief executives, signed a statement committing to fairer and more ethical business behaviour.

“I think it’s a good-faith effort, and we will see over time how sustainable it is,” Stiroh says about the improvement in the communication of ethics. “One test will be how much of this persists when maybe the focus from the official sector is different than it is today.”

He says it is important that the increase in public recognition moves from being an objective in any given year to being part of how a firm operates in the long run. “When we see these types of issues are being factored into hiring, performance evaluation and promotion decisions on a routine basis, I think that will be a sign that we have some stability and sustainability in this focus,” he says.

Macartney’s research also points to a greater emphasis on risk management, with mentions of ‘risk’ in annual reports rising from just over 6,000 in 2010 to nearly 10,000 in 2016.

Consistent with the results, Stiroh says supervisors have seen a “greater appreciation” by banks for the value of a strong independent risk management culture.

A result of this is a shift in banks’ mindset away from what is often referred to as a “bigger is better” approach. In part, this is also influenced by higher compliance and regulatory costs for large firms.

One of several examples of this is Deutsche Bank, which significantly scaled back its global equities business as part of a restructuring to improve profitability. Furthermore, Macartney finds that many of the 150 bank staff members he interviewed for the research have observed a shift towards producing “higher-quality revenue and profits, rather than needing to be the largest bank in the business”.

Stiroh attributes part of this refocusing to the prevalence of stress tests. “Firms seem to be spending more effort thinking about ranges of possible outcomes and asking whether that distribution of possible outcomes is consistent with their risk appetite, their strategic direction and the tone that comes from the board of directors,” he says.

Stiroh notes that, while there has been an increase in attention to compliance, there is still some concern over how much is merely ‘box-ticking’, rather than effective risk management. “The challenge is getting the balance right between an effective risk culture versus ‘check the box but not the spirit of the expectations’,” he says. Achieving that balance, he adds, is a “hard thing to quantify”.

This article first appeared on sister website Central Banking

Clearing house power-downs raise fears among members

By Steve Marlin | Features | 29 November 2019

Banks question CCP resilience to system outages, as debate swirls over non-default losses

The lights have never gone off at clearing houses such as LCH and Ice, but from time to time they flicker.

Regulators have counted a few dozen operational outages at central counterparties over the past year – all of them brief systems interruptions that affected banks and other users in a limited and inexpensive way.

An extended shutdown would be more damaging, though. Clearing members are starting to ask whether CCPs are sufficiently capitalised to withstand both real and potential loss events of this nature.

“If there were to be a longer-term outage in places where we’re dependent on market infrastructures and subject to mandatory clearing, it could be a problem,” says Marnie Rosenberg, global head of clearing house risk and strategy at JP Morgan.

With clearing dominated by a few large CCPs in key markets, failure of a critical provider could magnify disruption for users and escalate losses.

“There’s only one major, global alternative for each major asset class from a swaps perspective, so there aren’t that many options,” adds Rosenberg.

Attention on clearing house resilience has intensified following a nine-digit default at Nasdaq Clearing, and the industry is wrestling with the slippery problem of who should pay for losses caused by events unrelated to member default, such as cyber attack, natural disaster or IT meltdown.

Lawmakers have proposed recovery and resolution rules for clearing houses, including provisions for non-default losses, but the plans are controversial. Participants fear they will end up paying the bill for events that are out of their control.

To date, reported outages have been relatively infrequent, and have not resulted in significant financial harm. LCH, the world’s largest swap clearer, suffered 15 failures of its core clearing systems during the 12 months to the end of June, according to public data. These failures left the systems offline for seven hours and 35 minutes in total. Ice Clear Europe reported six failures, lasting a combined total of two hours.

No other CCP reported more than three outages during the same 12-month period. For LCH, the 15 reported shutdowns represent an aggregate for the five separate clearing services LCH operates for individual asset classes: swaps, foreign exchange, repos, equities and commodities. A single systems issue that affects all services is reported multiple times.

Clearing house operators are keen to emphasise that losses from systems failures are rare and, where they do occur, the impact is limited.

An LCH spokesperson says: “As a leading global multi-asset class clearing house, LCH plays a systemically important role in financial markets. We take our commitment to best-in-class risk management and resiliency seriously. Our track record of 99.97% availability speaks for itself.”

Dmitrij Senko, chief risk officer at Eurex Clearing, says: “Operational losses are typically small in contrast to operational risk hypothetical scenarios that can be large. Real losses happen, but they’re extremely small.”

The financial impact may be small, but the disruption can be widespread. VP Securities, the Danish central securities depository, suffered technical glitches during its migration to the Target2-Securities settlement platform in 2018, resulting in thousands of unmatched trades. The problems were ascribed to inadequate processing capacity, leaving VP unable to handle incoming instructions from customers and T2S. The firm later admitted it had failed to test the new system sufficiently.

Failures are not restricted to commercial entities. A meltdown in the Bank of England’s real-time gross settlement system in 2014 affected payments totalling nearly £300 billion ($390 billion). The system is undergoing a complete overhaul.

The rules, as they stand

Large banks measure and model the effect of events with low probability but potentially severe financial impact as part of stress tests such as the US Federal Reserve’s CCAR regime. Many believe that CCPs should also consider such one-in-100-year events. “Living in Holland, you should think about what would happen if the dykes burst. We should think about it, but nobody wants to think about it,” says Arnoud Siegmann, chief risk officer at EuroCCP, which clears mainly cash equities.

Like banks, CCPs in Europe are required to hold capital for operational risk under the European Market Infrastructure Regulation (Emir). CCPs must calculate capital using either the basic indicator approach or, if their supervisor allows them, the advanced measurement approach.

The basic indicator approach uses a simple percentage of revenues to set capital – in this case, 15%. The advanced measurement approach is a more risk-sensitive measure, but is floored at 80% of the capital calculated under the basic indicator approach.
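
As a rough worked example of the two approaches just described, the sketch below applies the 15% basic indicator charge to an invented revenue figure and floors a hypothetical advanced measurement result at 80% of it; the inputs are illustrative only.

```python
def basic_indicator_capital(gross_revenue: float, charge: float = 0.15) -> float:
    """Basic indicator approach: a flat percentage of revenues (15% under Emir)."""
    return charge * gross_revenue

def advanced_measurement_capital(modelled_capital: float, gross_revenue: float,
                                 floor: float = 0.80) -> float:
    """Advanced measurement approach, floored at 80% of the basic indicator figure."""
    bia = basic_indicator_capital(gross_revenue)
    return max(modelled_capital, floor * bia)

# Illustrative inputs: 500m of revenue, a 50m modelled AMA number
revenue, modelled = 500e6, 50e6
print(basic_indicator_capital(revenue))                  # 75,000,000 under the basic approach
print(advanced_measurement_capital(modelled, revenue))   # floored at 60,000,000
```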

LCH, Eurex Clearing, EuroCCP and SIX x-clear use the basic indicator approach.

In the US, clearers are required to hold one year’s worth of operating expenses to cover non-default losses under rules laid down by the Commodity Futures Trading Commission.

Infrastructure firms in general, including clearing houses, settlement systems and depositories, must hold at least six months’ worth of operating expenses to cover the costs of a recovery or orderly wind-down of operations, under the global CPMI/Iosco Principles for Financial Market Infrastructures. In a disclosure, the Depository Trust & Clearing Corp states that it maintains a plan for raising additional capital should it fall close to or below the minimum amount required.

Scaling capital to operating expenses is a cause for concern, clearing members say. Without full insight into the level of operational risks being incurred, users complain they can’t know whether the CCP holds enough capital to mitigate those risks.

“The rules are reasonably clear in a business-as-usual, non-recovery scenario. The challenge is that CCP capital should be scaled to operational risk, not operational costs. There’s plenty of scope for operational issues to give rise to significant losses, and therefore operational capital should be scaled appropriately,” says Bill Stenning, managing director of clearing, regulatory and strategic affairs at Societe Generale.

For CCPs, principal sources of operational risk include availability, service deficiency, damage to physical assets, and legal risk. A report on CCP resilience prepared by prominent sell-side and buy-side firms notes that CCPs are exposed to “cyber attacks, operational failures, fraud, theft, malicious acts of employees, and credit deterioration”. All of the disclosed outages at LCH were related to trade registration – neither margin calculations nor the margin runs were affected.

CCP disclosures provide a quarterly snapshot of system outages, but offer little further detail. Participants are left to wonder whether the reported interruptions are trivial, or signs of more troubling problems brewing underneath the surface.

“To the extent that these incidents feed into public disclosure, are they mere annoyances or are there some potential losses that can be much higher?” says Ulrich Karl, head of clearing services at the International Swaps and Derivatives Association.

Operational losses are a subset of non-default losses, a category that also includes investment and custodial losses. Options Clearing Corporation earlier this year filed for a rule change that would distribute part of operational losses – cyber, fraud, theft and others – to its members. Ice Clear Credit filed to have its members cover part of the cost of investment and custodial losses, but excluded pure operational risk.

The fact that clearers are taking different approaches to the treatment of non-default losses adds to uncertainty over whether a large operational loss might exceed a CCP’s available resources.

“There needs to be an appropriately sized capital framework to cover non-default losses similar to what banks have,” says Rosenberg from JP Morgan. “The amount of op risk capital they hold, which is a few hundred million at best, may not be enough to recover from a major operational loss event.”

Defensive measures

CCPs maintain dedicated teams of op risk professionals to ensure that business operations can survive a disruption. EuroCCP has four staff members working in operational risk, among a total headcount of 60. “The op risk people are mainly concerned with ‘did something not go according to procedure?’ – in which case, you rate the event, and if it’s serious, we make it an incident. And if it’s really serious, it becomes a crisis. This can be small things like somebody losing a laptop to serious things like the whole system doesn’t work,” says Siegmann.

Like most large organisations, EuroCCP maintains dual data centres with failover, or backup, capabilities to shield its operations from hardware failures. It also maintains a golden copy of daily transactions stored offline. Trades are netted at the end of each day, and settlement instructions are sent to central securities depositories.

“In that sense, you can rebuild your administration pretty quickly from what is already in the central securities depositories, because you send that every night. For a cash equities CCP, it’s relatively easy to reproduce what is there. But the situation might be more complicated for a derivatives CCP,” says Siegmann.

Protective measures are designed to ensure that clearing houses, a crucial building block in the financial edifice, continue to operate as required. Regulators acknowledge the systemic importance of these entities, and have explored how member defaults could rapidly spread contagion through a wide network of clearing firms.

A senior executive at one CCP says his firm is aware of its importance in maintaining the basic functioning of markets: “Operational risk in general and cyber risk in particular is one of the things that takes up most of my time – whether that’s overseeing penetration testing, running scenarios, or resilience planning [with] our members and key service providers. We’re incredibly cognisant of our role as a key hub for derivatives trading – people depend on us for the market to function.”

Aside from existing legislation covering financial market infrastructure, such as Emir and Mifid in Europe and the Dodd-Frank Act in the US, lawmakers are debating new recovery and resolution rules for CCPs. Proposed European Union laws would require a CCP to contribute a portion of its capital in the event of a non-default loss, but clearing members say this is too low to cover losses occurring from extreme events such as cyber attack.

“As we focus more on that topic, we need to have thought through the what-if scenarios. We haven’t yet had a situation where a CCP has been down for an extended period of time. But the whole point of recovery and resolution planning is to have in place an idea of what you’ll do if and when such a thing happens. As you test these plans, you may find things that aren’t suitable,” says Stenning at Societe Generale.

With billions on the hook in the form of posted margin, banks are pushing for a greater say in how CCPs manage risks that could have an impact on collateral, especially investment and operational risks. End-users, too, would welcome representation on a committee that oversees CCP governance and controls.

“There needs to be some mechanism for CCPs to gather member and participant feedback before there’s a material risk change that impacts our exposure,” says Rosenberg.

Any process for soliciting user input on non-default losses needs to be separate from CCP risk committees, adds Stenning. This is necessary to ensure member voices are heard and their views are treated independently.

If clearing members have their way, they won’t be the ones left alone in the darkness when the lights go out at a clearing house.

Editing by Alex Krohn

Industry-led op risk taxonomy launches

By Steve Marlin | News | 27 November 2019

Scheme aims to complement Basel classifications, ease peer comparison

A new bank-led system for categorising operational risks, developed by industry consortium ORX, reflects the increased importance being accorded to risks outside traditional market and credit risk parameters since the publication of the Basel Committee on Banking Supervision’s taxonomy.

The new taxonomy is intended to create a common language for financial institutions to share information and a framework for understanding the causes and effects of operational loss events. It will also allow for easier comparison between institutions when benchmarking their findings.

“As operational risk moves more toward non-financial risks, it makes sense to have a fresh look at the taxonomy,” says Jonathan Humphries, executive director for non-financial risks at Aon. “It’s an important starting point for any discussion around measurement as well as interaction with the business.”

The taxonomy was developed by ORX with consultancy Oliver Wyman, with input from an advisory group of ORX member institutions, and draws on some 60 taxonomies submitted by member firms. It has been in development for more than a year.

“This is a good example of collaboration across the banking industry to develop a real-world risk taxonomy for non-financial risks,” says Mark Cooke, group head of op risk at HSBC, who also chairs ORX. “For many banks, this will in large part mirror what they already have, yet it allows them to benchmark to common practice and [to] highlight areas they may want to evolve.”

In analysing bank taxonomies, ORX and Oliver Wyman noticed a wide divergence in the way banks were categorising risks such as cyber, conduct and third-party risk. This was due mainly to differences in how firms classify cause and effect. For example, an external fraud that’s perpetrated through a cyber attack could be classified as cyber risk, or as external fraud, with cyber risk as the underlying cause. Similarly, a technology failure that affects customers could be classified as either conduct risk or as a technology failure with a customer or conduct impact.

“Four years ago, there was still a reasonably high correlation with the Basel event categories. Today, there is a greater propensity to extend beyond those categories,” says an operational risk executive at a large UK bank. “We now have [through ORX] the ability to get our categorisations aligned in a way that’s useful.”

The taxonomy includes 16 Level 1 risks and 61 Level 2 risks. Included within the Level 1 risks are six of the original seven Basel categories. The seventh Basel category – losses that occur due to events involving clients, products and business practices – has been expanded into four new Level 1 categories. These are: legal; conduct; financial crime; and regulatory compliance. The remaining six Level 1 categories – third-party, statutory reporting and tax, business continuity, data management, information security, and model risk – represent risks that have risen in prominence.
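
The restructuring can be pictured as a simple grouping of the Level 1 categories named above. The sketch below encodes only those names (the 61 Level 2 risks are omitted), so it is an illustrative fragment rather than the full ORX taxonomy.

```python
# Level 1 categories named in this article, grouped by their relationship to Basel
retained_from_basel = 6   # six of the original seven Basel event types carry over
expanded_from_basel_cat7 = ["legal", "conduct", "financial crime", "regulatory compliance"]
newly_prominent = ["third-party", "statutory reporting and tax", "business continuity",
                   "data management", "information security", "model risk"]

level1_count = retained_from_basel + len(expanded_from_basel_cat7) + len(newly_prominent)
print(level1_count)  # 16, matching the taxonomy's headline Level 1 count
```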

Operational risk experts argue the ORX taxonomy creates a starting point for discussion and potential convergence around the causes and impacts of specific events. The need for such discussion is evidenced by the fact that companies now define an average of 14 Level 1 risks. Left unchecked, this divergence could lead to further splintering, making interbank comparisons almost impossible.

Mapping to Basel

As the number of Level 1 risk categories increases, so does the possibility of overlap between the way categories are defined. For example, model risk, which is one of the new Level 1 categories in the ORX taxonomy, can result from putting a model into production with inadequate testing, which would stem from a failure of transaction processing and execution – another Level 1 risk.

“Model risk issues fall under the existing Basel category, execution, delivery and process management,” says Ken Abbott, professor at New York University and former chief risk officer for the Americas at Barclays. “At Barclays, model risk was elevated to the top of the house because it is a major regulatory focus, and it can be a source of big problems if not governed correctly. Operational risk is complex, and so many of the risks are derivatives of the seven core event types.”

The new taxonomy is intended to augment the existing Basel taxonomy, which has been in use since 2001, rather than replacing it entirely. Many banks already map their own taxonomies to Basel, and that’s expected to continue.

“It doesn’t necessarily perfectly match anyone’s current taxonomy,” says Luke Carrivick, head of analytics and research at ORX. “There are going to be differences, and people will use it as a reference, providing a way of solving how they go about forming their own taxonomies.”

ORX has kept the Basel Committee apprised of its work, and asserts that Basel is pleased the private sector is taking its own initiative to manage operational risk. The committee declined to comment.

The consortium will incorporate the taxonomy into its ORX News service, which is free, in order to test its usefulness. It plans to rerun the exercise in 18 months to monitor changes to its member banks’ taxonomies and to look for signs of convergence.

“ORX News is a great test to see how well the taxonomy works in practice,” says Carrivick. “We hope that over time institutions will start to converge to the extent that people can share data.”

Regulators have also begun to focus on the need to distinguish between the causes and effects of operational risks, particularly non-financial risks such as cyber risk. A new system for recording losses from cyber risk, unveiled last week by the US Federal Reserve, is a case in point: the Fed is weighing whether to require banks to report losses from cyber attacks in addition to more traditional forms of risk.

Buy-side risk manager of the year: Vanguard

By Risk staff | Analysis | 26 November 2019

Risk Awards 2020: Fund giant gave more risk work to machines this year – from duration hedging to op risk

Manish Nagar, head of risk at Vanguard, set his team a challenge at the start of 2019: to do less. Since then, the team has automated the equivalent of 10,000 man-hours of previously hands-on work – roughly 5% of its output.

It’s not just the humdrum that has been given over to machines. Vanguard’s risk team constructed a new set of global computerised ‘algo wheels’ that pick the best execution algorithms to allocate the firm’s trades to. It built an optimiser that systematically evens out unwanted rates exposure. In operational risk, Vanguard used machine learning to scour trading data for signs of mistakes or bad practices.

“Technology is at the forefront of everything we do,” Nagar says. “The intersection of risk and technology continues to grow. Big datasets are available for us to process, and you need a technical skill set and an investment skill set to get intelligence out of that data.”

With 90 people located across sites in Australia, the UK and the US, Vanguard’s global risk division oversees market risk and operational risk, and includes a quant research team that contributes to the formulation of risk models and relative value tools.

The quant team, which also runs Vanguard’s transaction cost analysis program, spent much of the past year working on the firm’s automated algorithm wheels and a system for generating daily broker-dealer scorecards, Nagar says.

The wheels make their choices with zero human interference, using tick-level market data, on-demand computing power and data storage from the cloud. The best algos get more business. Weaker algos get shunted into the “penalty box”. No single person could do the calculations to make those calls rapidly enough, Nagar says.

“As you can imagine, there are tons of sell-side firms out there. Each will have 10 to 15 different algos they’re trying to push. It’s impossible for us to evaluate the algos unless we measure the performance of trades,” Nagar says. Dealer scorecards are used to track performance versus their peers across the different services they provide.
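
A toy sketch of the selection logic described above: score each broker algo on measured execution cost, route more flow to the cheapest and bench the weakest in a ‘penalty box’. The metric, threshold and algo names are invented; Vanguard’s actual wheel methodology is not public.

```python
def build_wheel(algo_costs: dict[str, float], penalty_threshold: float) -> dict[str, float]:
    """Turn measured execution costs (bps of slippage, lower is better) into routing weights.

    Algos whose cost exceeds the threshold go to the 'penalty box' and get no flow;
    the rest are weighted in proportion to how cheap they are.
    """
    eligible = {algo: cost for algo, cost in algo_costs.items() if cost <= penalty_threshold}
    if not eligible:
        return {}
    # Invert costs so cheaper algos get larger weights, then normalise to sum to 1
    inverted = {algo: 1.0 / cost for algo, cost in eligible.items()}
    total = sum(inverted.values())
    return {algo: weight / total for algo, weight in inverted.items()}

# Hypothetical slippage measurements, in basis points, per broker algo
measured = {"broker_a_vwap": 1.2, "broker_b_pov": 0.8, "broker_c_dark": 3.5}
print(build_wheel(measured, penalty_threshold=2.0))
# broker_c_dark is benched; the remaining flow tilts towards broker_b_pov
```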

The project has saved “millions of dollars” in transaction costs for Vanguard’s funds, Nagar says.

In fixed-income trading, Vanguard deployed a new optimiser and automated trader to level out the firm’s exposure to specific points on the yield curve.

“We didn’t want portfolio managers to be exposed to duration in corporate portfolios,” Nagar explains, referring to “active” or unwanted duration that arises when the maturities of the bonds in a fund and in its benchmark don’t exactly match. The optimiser and automated trader effectively hedge exposures at different points on the curve without a human trader having to calculate how many bond futures to buy.
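
The arithmetic behind that kind of hedge is essentially DV01 matching: at each curve point, trade enough futures to offset the unwanted dollar sensitivity. The sketch below is a generic illustration with invented exposure and contract figures, not Vanguard’s actual optimiser.

```python
def futures_hedge(active_dv01: dict[str, float], futures_dv01: dict[str, float]) -> dict[str, float]:
    """Number of futures contracts to trade at each curve point.

    active_dv01: unwanted dollar sensitivity per 1bp move, by key rate (positive = long duration)
    futures_dv01: dollar sensitivity of one futures contract at the same point
    A negative result means selling contracts to flatten the exposure.
    """
    return {point: -dv01 / futures_dv01[point] for point, dv01 in active_dv01.items()}

# Hypothetical active exposures versus benchmark, in dollars per basis point
active = {"2y": 12_000.0, "5y": -4_500.0, "10y": 30_000.0}
per_contract = {"2y": 35.0, "5y": 45.0, "10y": 75.0}
print(futures_hedge(active, per_contract))
# e.g. sell ~343 two-year contracts, buy 100 five-year, sell 400 ten-year
```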

The firm also overhauled its methods in its risk-taking framework in fixed income. Previously, when fund managers put on different types of hedging trades – an inflation or steepener trade, for example, or a mortgage trade – the risk team worked to keep exposure within a set of common ‘guardrails’. The risk team realised, though, that these types of universal controls failed to recognise the different levels of risk incurred by different trades.

“We modified the whole framework this year for all rates portfolio managers around the globe,” Nagar says. The team transitioned to a risk-based framework where hedging exposures are driven by past covariances between asset classes. The new curbs are therefore set using a more forward-looking view of the aggregate risk in a portfolio as opposed to managing allocations at the exposure level, Nagar says.

Machine learning

In operational risk, Vanguard applied machine learning to risk-level data. The firm has a forensics team tasked with looking at multiple fields of data and using machine learning tools to find patterns that indicate errors or potential wrongdoing. A portfolio manager might execute buys and sells on the same trade, for example, or input and then cancel orders just before they are placed in the market.

“Those things are very hard to pick up on a day-by-day basis,” Nagar says. “But if you look at longer-term trends of data, using our operational risk framework and the machine learning risk tools the forensic team has put to work, you can figure out any trends of that sort going on.”
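
As a simplified illustration of that kind of pattern search, the sketch below flags orders that were entered and then cancelled within a short window; the field names, window and data are invented, and a production system would apply machine learning to far richer data rather than a single fixed rule.

```python
from datetime import datetime, timedelta

def flag_quick_cancels(orders: list[dict], window: timedelta = timedelta(seconds=30)) -> list[str]:
    """Return order IDs that were cancelled within `window` of being entered.

    A high rate of such orders for one portfolio manager over a long period
    is the sort of trend a forensics team might investigate further.
    """
    flagged = []
    for order in orders:
        if order.get("cancelled_at") and order["cancelled_at"] - order["entered_at"] <= window:
            flagged.append(order["order_id"])
    return flagged

orders = [
    {"order_id": "A1", "entered_at": datetime(2019, 11, 1, 9, 30, 0),
     "cancelled_at": datetime(2019, 11, 1, 9, 30, 12)},   # cancelled after 12s: flagged
    {"order_id": "A2", "entered_at": datetime(2019, 11, 1, 9, 31, 0),
     "cancelled_at": None},                                # filled, not cancelled
]
print(flag_quick_cancels(orders))  # ['A1']
```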

One pattern the team observed was that portfolio managers would sometimes take outsized exposures on favoured tickers. That is inconsistent with Vanguard’s philosophy of diversifying sources of alpha, Nagar says, so the firm introduced a new framework this year on issuer concentration.

“We put a policy in place that says we are not willing to have more than ‘x’ position from a single issuer, which reduces this behaviour and encourages more diversification,” he says.

To support its ongoing automation work, Vanguard set in motion a hiring drive – to bring in new people with a knack for technology as well as risk management. “We have made tons of changes in our global organisation, hiring some PhDs, as well as some talent from outside of Vanguard in our three sites,” Nagar adds.

The work will continue, Nagar says. “Technology is something we have gotten much better at in 2019 and something that will definitely be on the agenda in 2020, as well.”

Industry to take up Fed’s white paper on cyber risk

By Steve Marlin | News | 15 November 2019

Workshop will delve into definitions and classifications of the most chameleon of risks

The financial sector will get a chance to rake over a white paper by Federal Reserve researchers that attempts to define and measure cyber risk at an industry workshop in Charlotte, North Carolina, next week.

The meeting, slated for November 20, aims to bring uniformity to gauging cyber risk, a category that encompasses the threat of hackers from far-flung corners of the planet, thieving in-house employees, defects in software, as well as innocent screw-ups by staff. The meeting is sponsored by the Federal Reserve Bank of Richmond, where four of the proposal’s five authors work.

The white paper, which appeared in August, treats cyber as a form of operational risk – not all banks do – and has the twin objectives of creating a common language for cyber losses and putting together a record of incidents that can be shared by the whole industry.

The paper classifies cyber risk along five dimensions: by cause, by consequence, by whether internal or external parties were involved, by whether the act was intentional, and by the Basel Committee on Banking Supervision’s operational risk event categories.

The causes refer to the method of attack: denial of service, phishing, malware, man-in-the-middle attacks, stolen passwords and zero-day attacks, among others. Consequences include business disruptions, system failures, breaches and theft.

Notably, the white paper called for cyber incidents to be mapped to one of the seven event types in Basel’s op risk taxonomy – this is seen as a prerequisite to coming up with a common way of measuring the fallout of cyber events. Some banks treat cyber as a subcategory of fraud, for instance, while others treat it as a distinct category. Greater harmony would go a long way to promoting comparability and data sharing among banks.

“There is a great deal of divergence in the industry on whether cyber should be a self-contained category,” says Evan Sekeris, a partner in the financial services practice at Oliver Wyman, and a former assistant vice-president at the Richmond Fed.

“The paper makes clear that cyber is a form of operational risk,” he continues. “You don’t need a separate cyber taxonomy. If everybody uses the same taxonomy, you can start comparing data and having a dialogue.”

Getting a grip on cyber

The paper’s authors hope to provide a template for collecting and reporting data on cyber losses using some 20 fields, among them date of discovery, loss and recovery amounts and business line. Only incidents resulting in losses would need to be reported; near-misses and forgone revenues from cyber failures would be excluded.
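
As a sketch of what such a reporting record might look like, the snippet below models a loss entry using only fields named in this article; the Fed template runs to roughly 20 fields, so this is deliberately partial and the field names are illustrative.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CyberLossRecord:
    # A handful of the fields the white paper proposes; names here are illustrative
    date_of_discovery: date
    business_line: str
    loss_amount: float            # gross loss in the reporting currency
    recovery_amount: float = 0.0  # insurance or other recoveries
    basel_event_type: Optional[str] = None  # mapping into the seven Basel categories

    @property
    def net_loss(self) -> float:
        return self.loss_amount - self.recovery_amount

# Only incidents with an actual loss would be reported; near-misses are excluded
record = CyberLossRecord(date(2018, 6, 14), "retail banking", 2_500_000.0, 400_000.0,
                         "external fraud")
print(record.net_loss)  # 2,100,000.0
```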

Cyber risk experts praised the white paper as a needed step towards getting a grip on cyber, the top operational risk at banks.

“Nomenclature is the Achilles heel for the cyber risk profession,” says Jack Jones, creator of the Factor Analysis of Information Risk methodology, a widely used system for measuring the impact of cyber risk. The white paper “represents an important step for financial services risk management around cyber”, adds Jones.

Cyber breaches, fraud and disruption to business resulted in $935 million in losses in the financial sector in 2018, according to ORX News. But that number is just publicly reported losses – most firms are averse to broadcasting their mishaps, preferring to keep quiet – or, as the Fed’s paper notes, to chalk them up as something else.

Back in March, an initial workshop at the Richmond Fed highlighted the lack of agreement among banks on what constitutes cyber risk, and how to tie it into existing op risk taxonomies. The white paper’s authors aim to finalise a system for classifying cyber risk by the end of this year, to be followed in 2020 by developing a system for measuring financial losses.

The paper’s authors – a team of two economists, two quants and an analyst – emphasised that the views expressed are their own and do not represent official Fed policy. They also nodded to private efforts to define and measure cyber losses, and stressed that their white paper is intended to supplement, not replace those ideas.

Complex problem, complex solution

Whether the paper is adopted in its current form is an open question. Banks would need to make substantial changes to their existing frameworks for defining and measuring cyber risk, and may be reluctant to do so. They may also balk at its complexity. With its five ways of classifying cyber risk, it may prove unwieldy in practice, especially for smaller firms.

But industry experts say that shouldn’t deter the Fed – cyber risk is notoriously convoluted to begin with.

“Operational risk events have become so complex that one-dimensional taxonomies are not very useful,” says Sekeris. “Some might feel that it’s overly complex, but complex problems require complex solutions.”

Most firms have used variants of the Basel taxonomy as a starting point for their own taxonomies, which may include cyber risk. ORX, the industry consortium, is developing its own op risk taxonomy, and has a separate project for sharing cyber loss information, best practices and taxonomies.

“The Fed’s work is one of several complementary initiatives to ORX’s own cyber risk programme,” says Luke Carrivick, head of analytics and research at ORX. “We welcome the overall message of sharing information for cyber risk management. This is exactly what our member institutions have been asking for.”

Industry experts view the Fed and ORX efforts as trying to bring a semblance of order to the current conflation of cyber and operational risk.

“I was pleased to see that the paper regards cyber as a form of operational risk. The various operational risk silos hinder the development of a holistic understanding of operational risk in many firms,” says Andrew Sheen, an op risk consultant and former manager at the UK’s Financial Conduct Authority.

Robo-raters help banks vet vendors for cyber risk

By Costas Mourselas | Features | 12 November 2019

Specialists tout service for monitoring third parties amid tougher rules on outsourcing risk

If you want to reduce the risk posed by third parties to your organisation, you hire another third party to police them.

This concept may not be intuitive, but cyber risk rating companies such as BitSight, RiskRecon and SecurityScorecard have made it central to their business proposition.

These companies are trying to offer an alternative to the staple methods of third-party risk management, where banks vet vendors using questionnaires, lengthy audits and site visits. Instead, the rating companies scrape the internet for any data that can help paint a picture of a third party’s cyber security defences and their vulnerability to cyber crooks.

Financial institutions are weighing up the service as they struggle to manage the risk posed by an intricate network of third parties. Many of those third parties themselves outsource to external vendors, creating a complex web of vendor relationships for banks to monitor.

“It’s risk management once removed, and it’s a problem the whole industry faces,” says Richard Downing, head of vendor risk management at Deutsche Bank in London.

Banks hoping for a magic bullet from cyber risk rating companies may be disappointed, though. There are questions over whether the ratings provide a sufficiently comprehensive measure of vendor risk. Some believe ratings can only ever complement, not replace, banks’ own internal vetting processes.

Regulators are well aware of the problem. The US Federal Reserve is focusing on vendor risk management as one of its supervisory priorities for the country’s largest banks, while the European Banking Authority has released stringent guidelines on outsourcing arrangements. The European Securities and Markets Authority plans to release its own outsourcing guidelines for financial firms not under the purview of the EBA next year.

The spectre of data loss is one of the biggest fears for risk managers, judging by Risk.net’s annual Top 10 op risks survey, which in 2019 placed data compromise in the top slot for the first time. As well as the costs from reputational damage and customer remediation, data loss can also attract swingeing fines under Europe’s sweeping General Data Protection Regulation (GDPR) laws.

Know the score

Cyber risk rating providers employ big data techniques to gauge the cyber security capabilities of firms, scraping the internet for information that can provide clues as to a company’s resilience against hacks, outages and other threats. The data is aggregated and run through an automated program, which scores the data along preset parameters. These scores are weighted to produce a security rating. SecurityScorecard has a 100-point system and gives out grades on a scale of A to F, with a report card that highlights what actions can be taken to improve the grade. BitSight offers a rating on a scale from 250 to 900 points, similar to a credit score, and RiskRecon provides a score anywhere from zero to 10.
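
Here is a toy sketch of those mechanics: weight a handful of security signals into a raw score, then express the same number on each provider’s scale. The signal names, weights and grade bands are invented; each vendor’s real methodology is proprietary.

```python
def raw_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Combine 0-1 security signals into one weighted 0-1 score (1 = strongest posture)."""
    total = sum(weights.values())
    return sum(signals[name] * weight for name, weight in weights.items()) / total

def letter_grade(score: float) -> str:
    """Map a 0-1 score to an A-F style grade (the band cut-offs are invented)."""
    for cutoff, grade in [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")]:
        if score >= cutoff:
            return grade
    return "F"

def to_scales(score: float) -> dict[str, object]:
    """Express one 0-1 score on the three rating scales mentioned in the text."""
    return {
        "letter grade": letter_grade(score),
        "250-900 credit-style score": round(250 + score * 650),
        "0-10 score": round(score * 10, 1),
    }

# Hypothetical signal readings and weights; real providers use far more inputs
signals = {"patching_cadence": 0.9, "dns_health": 0.7, "open_ports": 0.5}
weights = {"patching_cadence": 3.0, "dns_health": 2.0, "open_ports": 1.0}
score = raw_score(signals, weights)
print(score, to_scales(score))  # roughly 0.77: grade C, ~748, 7.7
```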

Broadly, these services monitor whether a firm’s systems are properly patched, the health of domain name systems (DNS), the security of a company’s network and other factors. Patching, or updating, the software used by companies is a basic but important way to avoid cyber breaches, experts say, as hackers can exploit temporary holes in security in unpatched software. DNS is the decentralised way in which entities are labelled on the internet, and companies must make sure to monitor their own DNS designations to avoid malicious activity – for example, attackers being able to affect internet traffic or impersonate a company’s email address.

However, these services can go beyond just monitoring the perimeter of companies’ security infrastructure. SecurityScorecard also eavesdrops on web chatter about companies to determine if data has been leaked or if hackers are planning to launch a cyber attack on a target.

Similarly, BitSight boasts of having access to one of the largest cyber sinkhole infrastructures in the world, after acquiring a Portuguese cyber analytics firm in 2014. The sinkhole is a huge dragnet that intercepts fake URLs. Often, this type of malicious traffic emanates from groups of infected computers referred to as botnets. By accessing these botnets, BitSight, SecurityScorecard and other firms can track communications sent by the computers and obtain a worldwide view of the ebb and flow of infections. This can provide some important intelligence on the vulnerability of different firms to potential cyber attacks.

“Access to this sinkhole lets us know when malicious links are clicked, as our sinkhole intercepts the message sent back to the hacker,” says Jake Olcott, vice-president of communications at BitSight.

SecurityScorecard also says it uses cyber sinkholes to aid monitoring. The company’s vice-president of international operations, Matthew McKenna, says automation is important in enabling cyber rating firms to increase the range of vendors they cover. He claims the firm scores 1.1 million companies.

RiskRecon was unable to respond to requests for comment.

The breadth of coverage offered by rating providers may be a draw for multinational companies that need to set variable levels of risk tolerance depending on region or market.

“Take a firm with an asset management business in the US and a wealth management business in Singapore,” says Charles Forde, group head of operational risk at Allied Irish Bank. “You will likely have a different risk appetite for vendors in these different regions so you can tailor your findings to each business. A score might be acceptable for one business but not another. That flexibility is useful.”

Cyber rating firms operate under a subscriber payment model. This sets them apart from their credit rating agency cousins, which use an ‘issuer pays’ model – a structure that some claim introduces perverse incentives into the rating process.

“Our business is similar to that of a conventional credit rating agency, but there are some fundamental differences,” says Olcott. “In the financial ratings market, organisations pay to be rated, which can lead to a significant conflict of interest. For us, any organisation can pay to get on the platform and see the ratings of hundreds of thousands of firms.”

Fast response

Proponents of cyber ratings claim the service offers a quick and easy snapshot of a vendor’s vulnerabilities compared with the traditional vetting procedure involving questionnaires and audits.

“These utilities become very cost-effective because while an audit or questionnaire of a vendor can take a minimum [of] four to six weeks, these cyber risk rating services give you an answer immediately,” says Amit Lakhani, the global head of IT and third-party risks for corporate and institutional banking at BNP Paribas in London.

Financial institutions have the option to outsource the questionnaire process using external monitoring services such as KY3P from IHS Markit or the TruSight utility from large American banks.

Allied Irish Bank’s Forde proposes an alternative approach to screening new vendor relationships, using cyber risk ratings instead of questionnaires. Banks could request and affirm basic information that would normally be included in a vetting questionnaire as minimum contract standards with vendors. Such information could include whether a vendor has a chief information security officer who sets policies, or what processes it has for data encryption. For more technical details normally requested in a questionnaire, the cyber rating firms can come into play, providing up-to-date information on cyber security policies.

“Cyber risk rating services offer an instant response on technical vulnerabilities, issues with patching and encryption, among other risks,” says Forde. “This approach also extends to discovery and monitoring more deeply into the supply chain, covering fourth parties.”

Gaining a detailed picture of the supplier relationships among vendors is hard for a large institution that might have hundreds of individual outsourcing arrangements. Cyber rating firms are starting to offer analysis of the chains of connection among vendors, to show third and fourth parties.

“If your supplier is subcontracting to another supplier, then these rating agencies can provide you with a view of the number of fourth parties your supplier has,” says BNP Paribas’s Lakhani. “It is very helpful to see if all your fourth parties are converging to certain cloud service providers such as Amazon Web Services [AWS] or Microsoft’s Azure platform. This could change your view of risk if it is determined that many of your third parties would suffer if any of these services were to go down tomorrow.”

He adds: “As an organisation, this helps because the EBA is very interested in seeing where risk concentrations exist.”

New guidelines from the EBA, released in February, provide detailed principles on how to manage outsourcing risk from third parties. Banks must maintain a comprehensive register of outsourcing relationships and closely scrutinise vendors based on their “criticality” to the functioning of the business. The rules go beyond the scope of the outsourcing guidelines released by the Committee of European Banking Supervisors in 2006, ramping up the compliance burden with regard to third and fourth parties, banks report.

As regulators finesse their guidelines for the management of third-party risk, their expectations for how firms tackle cyber risk are also taking shape. US regulators initially favoured a tough approach that would compel financial institutions to introduce a two-hour return to operations following a cyber attack. The proposal was shelved after industry criticism, but the Fed is pushing ahead with an initiative to set common standards for classifying and modelling cyber risk.

In Europe, the GDPR rules over data privacy introduced last year have forced all companies that handle personal data to overhaul how they use and store that information.

“Regulations are tightening in respect to third-party risk monitoring and assurance,” says McKenna from SecurityScorecard. “As an example, GDPR requires organisations to continuously monitor and understand third-party risk related to data privacy.”

The EBA’s focus on concentration risk is designed to ensure firms are not becoming overly dependent on the functioning of certain key entities. Cloud services such as Azure and AWS are under particular scrutiny by regulators, as banks and financial market utilities such as clearing houses outsource important functions to them.

Deutsche Börse, one of the world’s largest exchange groups, recently signed a deal with Microsoft, acknowledging that the deal allowed it to place services into the cloud that were “typically considered essential” for firms’ core businesses. The Options Clearing Corp has started a multi-year project to modernise business processes, including using the public cloud.

Cyber risk ratings could offer a way of sourcing information about fourth parties as companies adapt to the stringent new guidelines. It is unclear if firms will be able to negotiate rights of access to information on fourth parties, as required by the guidelines, according to Deutsche Bank’s Downing: “It’s something the industry is working on with vendors.”

“It is quite difficult to ask for third parties to grant us audit and access rights for fourth parties,” he adds. “It is still being debated as to what exactly the EBA guidelines mandate when it comes to fourth-party risk management.”

Data crunch

Third-party risk has a broader scope than the outsourcing of tech services. Large financial firms connect with many service providers that are not bound by outsourcing contracts and may be reluctant to divulge vital information.

William Moran, chief risk officer for technology at Bank of America, recently said important financial market utilities such as central counterparties often would not answer questions about their cyber security arrangements.

“They either won’t participate at all – that is, they won’t answer your questions – or they won’t let you do an on-site [inspection], or they basically cherry-pick which questions they want to answer,” he said at the Risk USA conference in New York in November.

Regulators that usually have privileged access to company information “don’t tend to be very responsive about what they’re doing in terms of cyber”, he added.

“I think the notion of having single, independent groups trying to evaluate vendors for things like cyber is good,” he said.

While the principle of cyber ratings may sound persuasive, successful application of the concept is a different matter. For rating firms that track hundreds of thousands of companies continuously, providing a consistent level of analysis on the data scraped from the internet is crucial. Some suggest the ratings firms are not always successful in this regard.

“The level of much of the detail provided by these services is quite good,” says Forde of Allied Irish Bank. “I think the challenge is you can’t use all these services in the same way. Some of the cyber risk ratings apply a very good layer of analysis to the data they gather, providing accurate conclusions. But the data analysis of some providers can be of low quality, so can’t be used as a decision point in a risk assessment.”

James Tedman, a partner at ACA Aponix, an operational risk advisory firm in London, agrees that the concept of cyber risk ratings is valid, but says there will always be gaps in the coverage these kinds of firms offer.

“An ‘outside-in’ approach is a useful complement to questionnaires in assessing and monitoring vendor risk,” he says. “However, you can only get to a subset of risk by using these cyber risk monitoring services.”

Tedman adds that a real-time, data-driven service will not offer insight into more qualitative factors, such as the level of staff awareness of cyber issues within a firm, or how susceptible the company is to a fourth party with access to its network.

“These are the sort of risks that cannot be captured from the outside, and require on-site risk assessments or questionnaires,” he says.

In other words, firms would be foolish to rely solely on external ratings for a complete picture of third-party cyber risk. Banks may need to devise internal processes to complement the information gleaned from ratings. Deutsche Bank is doing so with its protective intelligence unit, which combs news items to gauge the threat levels posed by vendors. The bank is working to better link this function with what it calls a “vendor criticality matrix”, which tabulates the systemic importance of third parties to the firm.

“There is a broader industry push to both use third-party services that help banks monitor vendors, but also to develop internal systems that follow news items about those vendors,” says Downing.
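As a purely illustrative sketch of how those strands might fit together, the snippet below blends an external cyber rating, a news-derived threat signal and the firm’s own criticality tiering into a single review priority. The field names, weights and thresholds are assumptions made for illustration only; they do not describe Deutsche Bank’s system or any ratings provider’s API.

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    external_rating: float  # 0-100 score from a cyber risk ratings provider (higher = better)
    news_threat: float      # 0-1 threat signal from news monitoring (higher = worse)
    criticality: int        # 1-5 tier from the firm's vendor criticality matrix (5 = most critical)

def review_priority(v: Vendor) -> float:
    """Blend outside-in and internal signals into one priority score.

    The weights are illustrative: a weak external rating or a spike in
    adverse news matters more for vendors the firm depends on heavily.
    """
    rating_gap = (100 - v.external_rating) / 100  # 0 = pristine rating, 1 = worst possible
    return v.criticality * (0.6 * rating_gap + 0.4 * v.news_threat)

vendors = [
    Vendor("cloud_provider",  external_rating=88, news_threat=0.10, criticality=5),
    Vendor("payroll_saas",    external_rating=62, news_threat=0.55, criticality=3),
    Vendor("office_catering", external_rating=45, news_threat=0.05, criticality=1),
]

# Vendors above an arbitrary threshold are flagged for the on-site assessments
# or questionnaires that outside-in ratings alone cannot replace.
for v in sorted(vendors, key=review_priority, reverse=True):
    score = review_priority(v)
    flag = "REVIEW" if score > 1.0 else "monitor"
    print(f"{v.name:16s} priority={score:.2f} -> {flag}")
```

On these made-up inputs, a moderately rated but critical vendor with adverse news is flagged ahead of a poorly rated but non-critical one, which is the behaviour a criticality-weighted blend is meant to produce.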

Third-party risk encompasses much more than a cyber risk rating can cover. Take, for example, the reputational risk that may affect a firm if it uses a vendor with poor working conditions. In other areas of tech, such as manufacturing, companies have faced public criticism over employment practices – Taiwanese firm Foxconn being a prominent example.

To get a complete view of vendors, firms will have to employ a mix of oversight strategies, of which cyber risk rating firms are one element. The machines are not quite ready to take over yet.

Correction, November 12, 2019: An earlier version of this article stated that the Office of the Comptroller of the Currency was working on a project to modernise business processes, whereas the Options Clearing Corporation is the organisation concerned. The article has been corrected.

Additional reporting by Tom Osborn

Editing by Alex Krohn

Swiss banks ask, how about a magic trick?

By Alessandro Aimone | Opinion | 11 November 2019

Banks pull off an accounting trick – with the help of their regulator

What if there was a way for a bank to conjure higher net interest income (NII), without raising rates for loans or cutting them for deposits? What if it could do so with just a flick of an accountant’s pen? 

You would be wise to be sceptical, as Gotham City’s thugs were when Heath Ledger’s Joker asked: “How about a magic trick?” 

But in this context, there’s no need for any Hollywood special effects, nor a psychopathic clown. Just enterprising bank managers and a willing financial regulator. 

Two banks have already pulled off the trick: UBS and Credit Suisse. 

On October 1, 2018, UBS began reporting in US dollars, instead of Swiss francs and UK pounds.

This resulted in an immediate uplift in group reported NII – of $300 million annually. Abracadabra! The bank also benefited from reduced foreign exchange-induced earnings volatility, even though a hefty chunk of its assets and liabilities are denominated in currencies other than the dollar.

This October, Credit Suisse emulated its Swiss peer by pulling off a similar currency switcheroo for its operational risk-weighted assets. As of the end of the fourth quarter, these would be denominated in dollars instead of Swiss francs.

The bank said the move was justified as the majority of its historic op risk losses (read ‘fines’) were incurred in US dollars. But it also meant the bank would hold more capital in dollars than francs, which, given the interest rate differential between the two currencies, and the effects of changing its capital hedging programme, will yield Sfr60 million ($60.2 million) of additional NII in Q4. The total benefit on a full-year basis is expected to be $250 million.

Neither bank could have pulled off the feat without their glamorous assistant, Finma. The Swiss watchdog signed off on both changes.

How can a change in reporting currencies lead to millions in additional earnings? It sounds like sleight-of-hand. Certainly some magical thinking is involved. In Credit Suisse’s case, by choosing to denominate op RWAs in dollars, it has to hedge the equity capital held against these by investing in dollar assets – which offer a tastier pick-up than Swiss franc investments. Essentially, it gave itself permission to invest more in higher-yielding assets. 
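A rough, back-of-the-envelope sketch shows the order of magnitude involved; the capital figure and rates below are illustrative assumptions, not numbers disclosed by the bank:

\[
\Delta\text{NII} \;\approx\; K_{\text{op}} \times (r_{\text{USD}} - r_{\text{CHF}}) \;\approx\; \$9\text{bn} \times \bigl(2.0\% - (-0.75\%)\bigr) \;\approx\; \$250\text{m a year}
\]

Here \(K_{\text{op}}\) is the equity capital hedged against op RWAs and the rates are short-term yields in each currency. On those assumed inputs, the arithmetic lands in the same ballpark as the full-year benefit the bank has flagged.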

Seeing through the illusions woven by financial reports is part and parcel of being an investor and, in these cases, shareholders have agreed to accept the changes as legitimate, rather than a case of accounting trickery. 

Op risk data: $250m legacy loan frauds hit Bric banks

By ORX News | Opinion | 8 November 2019

Also: costs from post-2012 cyber breaches top $2bn. Data by ORX News


October’s largest operational risk loss relates to an Indian loan fraud dating back to 2011. Executives of Srinagar-based J&K Bank are under investigation for colluding with a rice processing company over commercial loans totalling 11.24 billion rupees ($158.3 million) based on fake documents.

REI Agro took out loans from J&K Bank branches in Mumbai and New Delhi under the pretence that it would use the funds to pay farmers who provided it with rice crop, authorities allege. In turn, REI Agro would sell the crop and deposit the resulting funds with the bank as loan repayments.

Investigators found there were no records of who drafted the loan documents and no vetting certificate from the bank’s law department. Fraudulent loans for plant and machinery are also part of the probe announced in October.

ORX News has recorded 58 events totalling $2 billion among commercial banks in India in 2019, representing 45% and 39% of the frequency and severity, respectively, for commercial bank losses globally. Many of the losses are the result of years-old activity but are only coming to light now.

 

The second largest publicly reported loss is a 6 billion ruble ($93.4 million) fraud carried out at Moscow-based Sudostroitelny Bank in 2015. Senior executives are said to have entered into fictitious deals aimed at siphoning off assets just before the bank’s licence was due to be revoked, according to an ongoing government investigation. Sudostroitelny Bank’s main office building was transferred to a real estate mutual fund before being sold to a company registered to a frontman. Some of the bank’s liabilities were also transferred to shell companies.

In October’s third largest loss, BNP Paribas’ Ukrainian subsidiary UkrSibbank was defrauded of 1.1 billion hryvnias ($44 million) in commercial loans. The scheme was reportedly organised by a former member of Ukraine’s parliament, Dmytro Svyatash, between 2008 and 2018. Svyatash is accused of large-scale fraud and falsification of official documents. As of October 8, investigations were ongoing.

In fourth place, Citi was fined $30 million by the US Office of the Comptroller of the Currency for operational failings relating to real estate assets. Citi first identified the violations in 2015; they stemmed from the bank’s lack of adequate procedures to identify and monitor the holding period for the assets. The bank committed to addressing the deficiencies, but additional violations occurred.

Citi said it had strengthened its controls, processes and procedures since 2015. The OCC acknowledged the bank’s efforts to significantly reduce its inventory of the assets.

Finally, BGC Partners paid $25 million in fines after two of its interdealer brokers gave false information about foreign exchange options trades. The brokers said bids and offers were executable when they were not, and that nonexistent trades had occurred, in order to create an illusion of greater liquidity and tighter spreads in emerging market forex options. The activity was aimed at inducing clients to transact in these markets at times and prices at which they otherwise might not have.

One of the brokers discussed concealing this conduct from clients, and the other understood the conduct to be routine practice.

 

 

Spotlight: AML/CTF fines for Mozambique banks

Mozambique’s central bank has fined 18 banks for violating local laws on anti-money laundering and counter-terrorist financing between 2013 and 2019.

The fines, levied in October, totalled 172.9 million meticais ($2.7 million). Millennium Bim, a subsidiary of Portuguese Commercial Bank, faced the largest penalty: 76 million meticais. The bank failed to identify and verify customers, continuously monitor business relationships, maintain documents and immediately report suspicious transactions.

Other firms fined include Barclays, Societe Generale and Standard Bank. The remaining fines ranged from 100,000 meticais to 28 million meticais.

 

In Focus: Data breaches top $2bn since 2012

2019 marked the first year that data breaches rose to the top of op risk managers’ list of most pressing concerns – and from the numbers it is not hard to see why. The personal information of 106 million Capital One customers was compromised and shared online by hackers, the credit card company disclosed in July. The breach will cost up to $150 million in customer notifications, credit monitoring, technology costs and legal support, the firm estimates.

Non-cyber data breaches – where a firm’s IT security defences have not been penetrated – can also lead to significant losses. Canadian banking group Desjardins announced in June that an employee had used colleagues’ data access to steal the information of 4.2 million customers and subsequently shared the information outside the firm. Two months after disclosing the breach, Desjardins said it had spent C$70 million ($52.4 million) on a credit monitoring plan and identity theft protection for its customers.

In all but two years since 2012, firms have experienced more cyber data breaches than non-cyber data breaches, according to ORX News data.

 

Provisions made by credit reporting agency Equifax totalled $1.36 billion following a high-profile data breach, reported in 2017, that affected 160 million customers in North America and the UK.

As well as costs relating to protecting customers and boosting IT security, financial institutions may also face regulatory fines and settlements following a breach. Of its total $1.36 billion in provisions, Equifax set aside $690 million for legal costs. In the event, the firm reached settlements totalling $700 million with various US government agencies and faced a £500,000 ($652,000) fine from the UK Information Commissioner’s Office.

Between 2012 and 2019, financial institutions faced total losses of $2 billion from data breaches, of which $1 billion was fines and settlements, according to ORX News.

Editing by Alex Krohn

All information included in this report and held in ORX News comes from public sources only. It does not include any information from other services run by ORX, and we have not confirmed any of the information shown with any member of ORX.

While ORX endeavours to provide accurate, complete and up-to-date information, ORX makes no representation as to the accuracy, reliability or completeness of this information.

FBI sees steep rise in state-sponsored cyber theft

By Tom Osborn, Steve Marlin | News | 7 November 2019
US Department of Justice

Risk USA: Impact of US sanctions driving theft “to fund coffers”, says special agent

The impact of economic sanctions on rogue countries is helping to drive a dramatic rise in their sponsorship of sophisticated cyber attacks, with the goal of stealing funds to replenish national coffers, according to a senior agent in the US Federal Bureau of Investigation’s cyber security division.

Banks say they fear that the rise in the severity of hacks, with successful thefts of north of $100 million per attack, is also driving nation states to dramatically increase their sponsorship of cyber theft, in the hope of increasing returns.

Several key trends have contributed to the dramatic rise in cyber attacks, according to Richard Jacobs, assistant special agent in charge, counterintelligence cyber division at the FBI – theft for immediate financial gain, born of necessity, principal among them.

“There are many countries – or a few, anyway – that are very strapped financially as a result of sanctions. And they are literally engaging in massive cyber crime similar to any financially motivated criminal: for money, and that is to fund their coffers. We’re dealing with a lot of very sophisticated actors conducting cyber crime on behalf of government entities for that purpose,” said Jacobs, who was giving a special address at Risk USA on November 6.

Describing the FBI as a partner of the private sector, he said the bureau’s mission was to “identify, pursue and defeat” cyber criminals intent on stealing, disrupting or exerting “malign influence”, with the ultimate goal of undermining US economic pre-eminence.

Although most attacks still emanated from four countries, the network of threats to US firms tracked by the bureau was broadening, Jacobs added.

“We talk about the big four a lot – Russia, China, North Korea and Iran – but we are seeing a lot of other threats from up-and-coming countries, emerging countries that you don’t read about as much – places in South America, the Middle East and South-east Asia.”

Although external theft and fraud is still regularly cited as a significant operational risk, it generally ranks below banks’ top fears: a cyber attack leading to data theft or a disabling loss of operations.

But the sophistication of attacks was growing dramatically in tandem with the number of threat actors, warned Jacobs. He pointed to the recent example of a business email compromise attack, in which fraudulent payment instructions were made to look as if they were being issued by an executive within a firm.

“We saw one case come in six weeks ago for $95 million – and the company sent the money out,” said Jacobs, to audible intakes of breath among the audience. “We can all laugh at that, but the reality is it usually happens in three or in four stages. If you don’t have training and policies in place, this is happening. We were able to freeze about $70 million, but the company will likely take a loss of about $15 million.”

According to a recent analysis published by the bureau, some $26 billion has been stolen in this manner in the last six years – although that amount was certainly “significantly underreported”, he added: “Most victims never call us.”

But they should, said Jacobs: the FBI and its network of partner law enforcement agencies across the globe can usually act quickly to halt wire transfers and freeze funds – if they are given enough notice. Firms suffering an attack should first of all submit details to the bureau’s Internet Crime Complaint Center, or IC3. That activates a ‘kill chain’, with the bureau co-ordinating with the US Treasury and contacts overseas to try and get the assets frozen.

“The key point there for business continuity execs is, you have about a 48-hour window to report if – to be blunt – you’re to have any possibility of getting your money back. It’s highly unlikely afterwards,” he said.

Another trend the FBI has observed is the growing pervasiveness and sophistication of ransomware attacks over the last several years. These have evolved from something the bureau could “easily defeat” into an all-pervasive threat – the WannaCry attacks being a prime example.

“How did we get here?” asked Jacobs. “Because many corporate victims are paying the ransom. We’re asked on a regular basis what our position is on paying ransoms: we don’t condone it. At the end of the day, it’s going to be a business decision: if you’re not prepared and can’t operate with those systems, you may not have a choice. The point being, by paying the ransom, you’re funding the very thing we’re trying to prevent, and making it a lot harder, and a lot more common among the criminals. It’s a very lucrative business.”


Banks agree they need to calibrate their level of cyber risk spending to the risks. Op risk models tend to focus on the historical amounts that have been lost, but often fail to take into account the sophistication of attacks that can be perpetrated by terrorists funded by nation states.

For example, a large-scale attack that used cryptocurrencies to seamlessly extract assets from the banking system could encourage perpetrators to dedicate far greater resources than defenders have planned for.

Operational risk losses tended to be skewed towards the lower end of the loss spectrum, but that view needed to be revised upwards in light of the growing sophistication and risks of a large-scale cyber attack, said William Moran, chief risk officer for technology and head of future risks at Bank of America, during an earlier panel debate at the conference.

“All of us in op risk have a clear view of how much money you could steal, which maxes around $100 million,” said Moran. “But if that $100 million goes to $500 million, you have to be concerned about the professionalism of the people doing it.”

That, in turn, was pushing nation states to increase their sponsorship of attacks, in the hope of speculating to accumulate, he said.

“The professionalism of the people doing these things has gone way up. How we think about the level of defence required is dependent on how much money can be stolen. They won’t spend $50 million to steal $100 million, but they might spend $50 million to steal $500 million.”