Artificial intelligence in tandem with human analysis seen as effective for know-your-customer
It pays to know your customers. Just ask US Bank, smarting from a $613 million fine for anti-money laundering violations in February. Or Deutsche Bank, fined $630 million for similar failings last year.
Authorities around the world are keen for lenders to develop strong know-your-customer (KYC) procedures, and are prepared to levy big penalties on banks that fall short.
As a result, firms are looking to new data-based technologies to sharpen up their compliance and help avoid regulatory punishments. Machine learning and artificial intelligence figure prominently in this push.
But any visions of a future where compliance departments are staffed solely by robots are premature. Human intervention still plays a big role in transaction monitoring, and banks should be wary of depending too heavily on smart algos.
“AIs can pull out anomalies in our customer dataset, which analysts can then use to apply common sense and assess the level of risk,” says Frederick Reynolds, global head of financial crime legal at Barclays. “AIs are not good at applying common sense, so analysts provide an extra level of input for AI models.”
As an example, a financial crime head at a UK bank cites a fraud algorithm programmed to flag all transactions in excess of £10,000 for additional checks. It was only when an analyst reviewed the data and discovered a series of transactions at £9,999, payments apparently structured to slip just under the threshold, that the bank was able to tackle this flaw in its control processes.
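The flaw described above can be illustrated in a few lines. The sketch below is purely hypothetical and not based on any bank's actual rules: a hard threshold rule catches single large payments, while a complementary check on clusters of near-threshold payments per customer surfaces the structuring pattern the analyst spotted. All parameter values are illustrative assumptions.

```python
from collections import defaultdict

THRESHOLD = 10_000   # flag any single transaction above this amount
NEAR_BAND = 0.95     # "near threshold" = within 5% below the limit
MIN_CLUSTER = 3      # repeated near-threshold payments needed to alert

def flag_transactions(txns):
    """txns: list of (customer_id, amount). Returns over-threshold
    alerts and suspected structuring clusters."""
    over = [(cust, amt) for cust, amt in txns if amt > THRESHOLD]
    near = defaultdict(list)
    for cust, amt in txns:
        if NEAR_BAND * THRESHOLD <= amt <= THRESHOLD:
            near[cust].append(amt)
    structuring = {c: amts for c, amts in near.items()
                   if len(amts) >= MIN_CLUSTER}
    return over, structuring

txns = [("acct1", 12_500), ("acct2", 9_999), ("acct2", 9_999),
        ("acct2", 9_998), ("acct3", 450)]
over, structuring = flag_transactions(txns)
# The threshold rule alone catches only acct1's payment; the cluster
# check also flags acct2's repeated £9,999-style payments.
```

The point is not that this specific rule is right, but that a single static threshold creates a blind spot which a second, behavioural view of the same data can close.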
“Although technology is good at automated search and data analytics, there is always room for human interpretation and intuition,” says Tony Wicks, head of screening and fraud detection at Swift, a financial messaging service. “The ultimate decision point will always sit with a human. Technology is there to help and support human decisions, not to replace them.”
Regulators agree. Under Europe’s new data protection regulation, known as GDPR, firms that use data to make decisions on, say, lending must build in an element of human judgement.
Article 22 of GDPR states: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” The regulation leaves the type and extent of human intervention to the discretion of businesses.
Regulations elsewhere aim to tighten procedures regarding AML, or anti-money laundering. In the US, new rules issued by the Treasury Department’s Financial Crimes Enforcement Network address AML processes within banks and close loopholes in the disclosure of beneficial ownership.
In Europe, beneficial ownership is a large part of the Fifth Anti-Money Laundering Directive, ratified by lawmakers in April. The regime will force banks to establish the identity of company owners and account holders. The rule changes are, in part, a response to the “Panama Papers” leaks, which exposed a global network of illegal funding.
The directive also instructs national authorities to set up centralised registers of bank accounts. These utilities will be interconnected across the bloc, enabling investigators to probe suspicious accounts more easily. Member states have 18 months to transpose the rules into local law.
Centralised information may aid other aspects of the KYC effort, too. A report from the European Supervisory Authorities in January highlights the use of “central identity document repositories” as one solution for customer due diligence. The report states these repositories “aim to streamline the collection and exchange of [customer due diligence] data and documentation between participating firms and their customers, thereby avoiding the same information being requested repeatedly from the same customer”.
As well as the threat of regulatory fines for AML failures (see box: Paying the penalty), banks face other losses from financial crime: primarily, the direct monetary cost of the crimes themselves. For example, Bangladesh Bank suffered an $81 million loss in 2016 when its computer network was hacked and criminals placed fraudulent transfer requests via the Swift network.
Some banks use insurance to mitigate losses, but exclusions can leave gaps in coverage. Risk managers are sceptical that insurance against cyber attacks is effective for the size and scale of losses involved.
Reputational damage and loss of business can also hit a bank financially. Banks susceptible to financial fraud, or whose remedial action is inadequate, risk losing business. UK bank TSB has been forced into costly measures to dissuade customers from closing their accounts following persistent IT problems which have left the bank an easy target for fraudsters. Banking association UK Finance recorded nearly 2 million cases of unauthorised financial fraud in 2017, up 6% year on year.
In-house or out
Banks are adopting different strategies to mitigate the risk of financial crime. Larger organisations with complex operations are developing in-house procedures, while smaller firms may find an external, off-the-shelf solution to be more cost-effective.
HSBC announced in April that it will use AI technology developed by UK data analytics firm Quantexa to support its anti-money laundering processes, following a successful pilot carried out in 2017. Royal Bank of Scotland and Vocalink, a payments business, have partnered to create a system to scan transactions by small and large business customers to identify false invoices and potential instances of fraud.
Regulators have a part to play in the development of fintech solutions. The UK’s Financial Conduct Authority created a regulatory “sandbox” in 2017, inviting banks and vendors to test tech applications. Of the first cohort of firms, 90% are now moving towards a wider market launch, with at least 40% of these having received investment during or after their sandbox testing phase.
Gemma Rogers, co-founder and director of Fintrail, a financial crime risk management firm, sees scope for new technology at most stages of the client lifecycle. “This can range from onboarding clients through easily integrated systems, such as using e-identification verification checks and incorporating new tools to carry out background checks for KYC. Other tools can provide ongoing reviews and background checks on existing clients,” she says.
Better use of data will improve efficiencies and help cut false positives, vendors hope. A 2017 report by a UK think-tank estimated that 80–90% of suspicious activity reports (SARs) are of no immediate value to active law enforcement investigations, despite the time and effort spent raising them. The same report estimated that the global private sector spent $8.2 billion on AML controls in 2017.
This points to a need for AML systems that are better targeted in both time and cost. Wicks from Swift estimates that compliance processes can account for between 10% and 15% of a bank’s costs, while analysts can spend 80% of their time looking for the right data and only 20% actively analysing it.
“By improving the quality of AML and fraud detection systems, they can be more effective, reducing the noise, ensuring more of the right kinds of financial crime risks are identified, resulting in an increased number of SARs that provide real value and can be acted upon,” Wicks says.
The challenge is identifying what to measure. Static classifications of risk, also known as typologies, can quickly become out of date. Reynolds at Barclays says: “Innovative technologies should not replicate human-based systems but should seek to find new solutions that change how we do business, not just make it faster. We should not be looking at typologies, which can be based on cases and networks identified several years ago, but at anomalies.”
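The distinction Reynolds draws can be made concrete. A static typology is a fixed rule; anomaly detection instead scores each new payment against the customer's own history and flags statistical outliers, so the baseline moves as behaviour changes. The sketch below is an illustrative assumption, not Barclays' method; it uses a robust z-score (median and median absolute deviation) as one simple way to do this.

```python
import statistics

def anomaly_score(history, amount):
    """Robust z-score of `amount` against a customer's past payments.
    Uses median and median absolute deviation so a few past outliers
    do not distort the baseline."""
    med = statistics.median(history)
    mad = statistics.median(abs(x - med) for x in history) or 1.0
    return abs(amount - med) / mad

history = [120, 95, 110, 130, 105, 98, 115]  # typical monthly payments
low = anomaly_score(history, 112)     # close to the customer's norm
high = anomaly_score(history, 9_000)  # far outside the pattern
```

In practice the score would feed a triage queue rather than an automatic block, with an analyst applying the common-sense judgement the article describes.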
Rogers of Fintrail agrees that risk identification must be flexible: “Fintech tools may use AI or machine learning to create models or typologies of financial crime risk which can evolve over time and are applied holistically, allowing users to more proactively assess transactions and potential financial crime risks.”
This flexibility extends to how different countries interpret compliance regulations. European Union member states have transposed the EU’s money-laundering directive into national law according to their localised needs. Their technological requirements will vary accordingly.
Endija Springe and Carolin Gardner, AML policy experts at the European Banking Authority, explain that Germany’s KYC process relies heavily on face-to-face verification. This method may be inconvenient for some customers, or difficult to reconcile with some business models. One proposed solution is to use video conferencing to meet that face-to-face requirement.
Other countries may require a different emphasis on customer due diligence. For example, in regions where identification and verification processes are already efficient, firms might focus on developing new technology for transaction monitoring.
Right tool, right job
New technologies are not without risk, though. The European Supervisory Authorities’ report states: “Innovation in this field, if ill understood or badly applied, may weaken firms’ money laundering and terrorist financing safeguards and subsequently, undermine the integrity of the markets in which they operate.”
Firms must constantly monitor and update new systems to make sure they accurately reflect financial crime threats, especially in cases where compliance systems have been outsourced to external providers. Moreover, staff must be fully trained on how to use these tools and the significance of the tools’ findings, otherwise firms risk missing financial crime flags.
Rogers says: “It is not possible to outsource risk, so even small firms that have bought in compliance tools have to make sure they understand them and have put the proper controls in place. This is something that regulators and auditors will be assessing.”
Reynolds believes that extensive testing and consultation can mitigate potential risks involved in the transition to new technological solutions. “Some trials of new systems may fail in internal testing but this is healthy, it allows firms to build on that failing and find new ways that target risk more effectively and produce a better system in the end,” he says.
It is not just supervisors that need to be convinced about the suitability of compliance systems; internal boards must sign off on new compliance tools and will be careful to ensure that any risk is managed.
An effective, up-to-date compliance process must also inspire confidence among fellow banks. Wicks says: “It is all about transparency now. Banks interacting with other banks want to know that they have good anti-money laundering processes in place and that they can trust you.”
Mutual trust could foster further benefits, such as greater co-operation between firms in developing new solutions to financial crime risk management. “AI is limited to the data pool on which it can draw, such as the Barclays customer base,” Reynolds says. “Allowing an AI to work across the datasets of multiple banks, without actually sharing data due to data sharing and privacy restrictions, would allow the AI to learn faster. This would result in much better identification of anomalous activity for all banks than they could achieve alone.”
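What Reynolds describes, training across multiple banks' datasets without the data leaving each bank, resembles the federated learning pattern. The sketch below is a hypothetical illustration of that idea, not anything Barclays has built: each "bank" takes a local gradient step on a toy one-parameter model, and only the updated parameters, never the underlying transactions, are averaged centrally. Real deployments would add encryption and differential privacy on top.

```python
def local_update(weight, data, lr=0.01):
    """One gradient step of a 1-parameter model y = w*x fitted to
    (x, y) pairs, computed entirely inside one bank."""
    grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    return weight - lr * grad

def federated_round(weight, bank_datasets):
    """Each bank updates locally; only model parameters are pooled
    and averaged (federated averaging). No raw data is shared."""
    updates = [local_update(weight, d) for d in bank_datasets]
    return sum(updates) / len(updates)

bank_a = [(1.0, 2.1), (2.0, 3.9)]  # illustrative per-bank data,
bank_b = [(1.5, 3.0), (3.0, 6.2)]  # both roughly following y = 2x

w = 0.0
for _ in range(200):
    w = federated_round(w, [bank_a, bank_b])
# w converges towards ~2, learned jointly without pooling the data
```

The same averaging step scales to the weights of a large anomaly-detection model; the design choice is that parameters are usually far less sensitive than the transaction records they were trained on, which is what makes the privacy constraints Reynolds mentions tractable.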
Firms are already using data-pooling initiatives to comply with new market risk requirements known as FRTB. Here, banks must separately capitalise risk factors that lack suitable pricing data, incurring additional costs. Vendors are touting data utilities to lessen these capital add-ons; banks are even considering teaming up to form their own pools.
But data-sharing initiatives between major banks require close co-ordination with regulators, especially across jurisdictions, where data transfers can be problematic even within a single bank. The Financial Conduct Authority is considering launching a second, global sandbox that would allow firms to test new technological solutions across different jurisdictions before going to market. The FCA invited comments or suggestions on the topic by March 2018 and is currently evaluating the responses. No details on the global sandbox are available as yet, but uses proposed by the FCA include understanding AML/KYC compliance and onboarding; supporting firms aiming to launch in multiple jurisdictions; and addressing global policy and regulatory challenges, potentially in co-ordination with regulatory bodies in other jurisdictions.
“Financial crime is global, not jurisdictional,” says Reynolds. “Criminals are jurisdiction-neutral and maintaining jurisdictional boundaries gives them an advantage. Our industry is not always competitive. We want to share innovation in financial crime technology as it helps to make us all safer.”
This trend towards “consolidation and connectivity”, as Reynolds terms it, has already encouraged vendors to wrap more products into their offerings, such as incorporating negative news web trawls into machine learning.
The growth of new technological opportunities will allow experimentation across small and large firms. Wicks says: “Smaller firms can be more agile in adopting new approaches and using technology to protect and enable their business.”
On the other hand, larger businesses and banks have more financial resources to invest in refining existing systems and developing new solutions. Such firms have a strong incentive to keep up the momentum in technological advances in order to improve financial crime safeguards and mitigate the potential for major failures that could lead to large fines.
Paying the penalty
Since 2017, banks have paid a series of hefty sums to regulators for anti-money laundering deficiencies, according to data from ORX News. In top place is the $651 million that money transfer firm Western Union paid to US authorities for failing to prevent AML breaches, including sending funds to human traffickers in China. Deutsche Bank handed over $630 million to US and UK authorities over allegations it facilitated mirror trades that enabled over $10 billion to be moved out of Russia. Close behind is the $613 million that US Bank coughed up for various missteps, including failure to report suspicious activities of a customer who ran a billion-dollar fraudulent payday lending scheme.
Editing by Alex Krohn