AI: The Double-Edged Sword in AML/CTF Compliance

Artificial intelligence (AI) has enhanced financial services and improved regulatory compliance, among other benefits. However, AI poses an equally serious threat to ethical governance, data protection and cybersecurity, as well as to fundamental human rights.

There are over 70 proposed definitions of AI, one of which describes it as a suite of autonomous, self-learning and adaptively predictive technologies that enhance the ability to perform tasks.1

Several financial institutions (FIs) in South Africa have already experienced the transformative potential of some form of AI in their organisations, while others have adopted a ‘wait-and-see’ strategy. As the value proposition of this technology gradually emerges, the risks, as well as the legal and governance challenges AI may cause (e.g., lack of accountability and transparency, and potentially arbitrary and/or discriminatory outcomes), should be considered.

The Need for a Different Approach

Year on year, a significant amount of market research is conducted and white papers are published on the increasingly high cost of compliance, specifically anti-money laundering (AML) and counter-terrorist financing (CTF) compliance. One of the main causes of this high cost is the volume of manual, repetitive and data-intensive tasks. These tasks often lead to low morale in the workforce, while the sophistication and volume of financial crime threats continue to increase.

The traditional approach to tackling AML/CTF relied on highly manual processes and reactive risk-detection solutions, and did not provide a real-time view, proving ineffective against modern financial crime threats. Compliance functions should therefore innovate and adopt the latest technology, not only to drive efficiency and reduce cost, but more importantly, to identify new and creative ways to tackle financial crime effectively. The COVID-19 pandemic has been a catalyst for many FIs’ automation of compliance processes through robotic process automation (RPA) and machine learning.
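
To make the idea concrete, below is a minimal sketch of the kind of repetitive screening task that RPA-style automation can take over. The watchlist entries, function name and similarity threshold are hypothetical illustrations, not a production screening engine.

```python
# Minimal sketch of automating a repetitive name-screening task.
# WATCHLIST, MATCH_THRESHOLD and screen_name are hypothetical.
from difflib import SequenceMatcher

WATCHLIST = ["Jane X Doe", "Acme Shell Holdings Ltd"]  # hypothetical entries
MATCH_THRESHOLD = 0.85  # assumed cut-off, tuned to the FI's risk appetite

def screen_name(customer_name: str) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity exceeds the threshold."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, customer_name.lower(), entry.lower()).ratio()
        if score >= MATCH_THRESHOLD:
            hits.append((entry, round(score, 2)))
    return hits

# Each hit would be queued for analyst review rather than auto-actioned.
print(screen_name("Jane X. Doe"))  # e.g. [('Jane X Doe', 0.95)]
```

In practice, the value of automating such a task lies less in the matching logic itself than in removing the manual copy-and-check work that drives up cost and erodes morale.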

The Potential of AI in AML/CTF Compliance

In recent years, FIs have been testing AI to assist analysts in highly repetitive AML/CTF compliance tasks and to improve the performance of AML/CTF controls and processes to fight crime. Key areas where AI and cognitive solutions are having an impact on current AML/CTF processes include automating data collection, enhancing client risk scoring and alert prioritisation, streamlining customer due diligence (CDD), leveraging linkage analysis, improving customer segmentation, and strengthening anomaly detection, whether by identifying known suspicious patterns or by discovering new ones.
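
As a rough illustration of the anomaly-detection use case, the sketch below flags transactions that deviate from a customer segment’s usual behaviour. It assumes scikit-learn is installed; the features, synthetic data and contamination rate are hypothetical, and real AML models use far richer features plus labelled feedback from analysts.

```python
# Minimal anomaly-detection sketch: flag transactions that deviate from
# normal behaviour. Data and parameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic history: [amount, transactions_per_day] for one customer segment.
normal = rng.normal(loc=[120.0, 3.0], scale=[40.0, 1.0], size=(500, 2))
suspicious = np.array([[9500.0, 1.0], [110.0, 40.0]])  # outliers by design
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=42).fit(X)
flags = model.predict(X)  # -1 marks an anomaly, 1 marks an inlier

# Flagged rows would become prioritised alerts for analyst review.
print(X[flags == -1])
```

The unsupervised approach shown here matters because it can surface new patterns, not just matches against known typologies.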

The number of digital transactions is growing at an estimated 12.7% annually, and by 2022, an estimated 60% of global gross domestic product (GDP) will be digitised. In a digital context, the traditional verification tools within CDD do not apply. AI is making a significant contribution to digital identity, including biometric technology, digital device identifiers, high-definition scanners and high-resolution video transmission (‘live’ remote identification and verification). There are 1.7 billion unbanked adults worldwide, and 26% of them cite lack of identification as the primary barrier to accessing financial services. Digital identification offers another important benefit: it enables individuals without traditional identification to access financial services, thereby improving financial inclusion.2

AI Challenges in AML/CTF Compliance

FIs must assess how AI may increase efficiencies in AML/CTF compliance processes while considering how AI may create new threats or amplify existing ones. AI models must be designed and implemented responsibly and with transparency in mind to ensure their capabilities and limitations are clearly understood. The most commonly cited challenges are a perceived threat of ‘redundancy’ of humans in favour of machines; bias and discrimination in decision-making and the profiling of customers; cybersecurity; data privacy; and a lack of transparency. These challenges often entail the risk of regulatory non-compliance with the General Data Protection Regulation (GDPR) in Europe and the Protection of Personal Information Act (POPIA) in South Africa, especially the provisions related to automated decision-making, such as section 71(1) of POPIA.3
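
One common mitigation, sketched below in the spirit of POPIA section 71(1) and GDPR Article 22, is a human-in-the-loop gate: decisions that significantly affect a customer are never finalised on a model score alone. The score, threshold, field names and routing labels are hypothetical illustrations.

```python
# Minimal human-in-the-loop sketch: high-impact or high-risk decisions are
# escalated to an analyst. All names and thresholds are hypothetical.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.7  # assumed risk-score cut-off for escalation

@dataclass
class Decision:
    customer_id: str
    risk_score: float   # output of an upstream model, 0.0 to 1.0
    high_impact: bool   # e.g. account closure, declined onboarding

def route(decision: Decision) -> str:
    """Route a model-scored decision: auto-clear, or escalate to a human."""
    if decision.high_impact or decision.risk_score >= REVIEW_THRESHOLD:
        return "ESCALATE_TO_ANALYST"  # a human makes the final call
    return "AUTO_CLEAR"               # low-risk, low-impact: safe to automate

print(route(Decision("C-001", 0.92, high_impact=False)))  # ESCALATE_TO_ANALYST
print(route(Decision("C-002", 0.15, high_impact=True)))   # ESCALATE_TO_ANALYST
print(route(Decision("C-003", 0.15, high_impact=False)))  # AUTO_CLEAR
```

A gate of this kind keeps the decision that ‘affects the data subject substantially’ out of the solely automated category that both statutes restrict.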

Current Regulations Governing ‘Ethical’ AI

Governments worldwide are investigating and implementing policies and strategies to support innovation in AI technologies. In May 2019, the Organisation for Economic Co-operation and Development (OECD) adopted its Principles on AI, the first international standards agreed by governments for the responsible stewardship of trustworthy AI. The OECD Principles on AI4 include concrete recommendations for public policy and strategy that, inter alia, advocate the use of AI that is innovative, trustworthy and respectful of human rights and democratic values. They further provide mechanisms for governments and policymakers to create a ‘human-centric’ approach to AI, which entails protecting the rule of law, human rights and democratic values throughout the AI lifecycle. The UK Information Commissioner’s Office (ICO) has recently produced guidance on AI and data protection, touching on, among other things, the importance of ensuring human oversight of automated decision-making where employees make important decisions using AI tools.5 In addition, South Africa has a number of initiatives for assessing the opportunities and risks posed by AI, such as the Intergovernmental Fintech Working Group (IFWG) and the Centre for Artificial Intelligence Research.

Conclusion

It is hard to argue against technology that enhances financial crime detection and prevention. However, where such technology is applied in a manner that lacks transparency, reduces human accountability and infringes on fundamental human rights, it should be reassessed within an FI’s risk appetite framework. Regulators expect FIs to explain how the technology they use helps them make risk-based decisions and detect and prevent financial crime.

Successful implementation of AI in AML/CTF compliance requires commitment and collaboration across multiple stakeholders: FIs, vendors, regulators and government. Collaborative efforts can reinforce wider adoption and the identification of further benefits, while also setting standards for appropriate governance and controls to manage the safe development and deployment of AI-enabled solutions.

The nature, level and applicability of AI regulation in AML/CTF compliance will need to balance various interests and considerations. An unregulated approach could have catastrophic consequences, while over-regulation may stifle innovation and hinder the detection and prevention of sophisticated financial crime schemes. With the right governance and guidelines in place, there is potential to use this technology not only to improve lives across the continent, but also to tackle financial crime in the modern age.6

Ilze Calitz, CAMS, chief legal officer, Monivation (Pty) Ltd, Sandton, Johannesburg, Gauteng, South Africa, ilze@monivation.co.za

  1. Jon Truby, Rafael Brown and Andrew Dahdal, “Banking on AI: mandating a proactive approach to AI regulation in the financial sector,” Taylor and Francis Online, https://www.tandfonline.com/doi/full/10.1080/17521440.2020.1760454
  2. “Guidance on Digital Identity,” FATF, March 2020, http://www.fatf-gafi.org/publications/fatfrecommendations/documents/digital-identity-guidance.html
  3. Section 71(1) of POPIA provides individuals (‘data subjects’) with a fundamental right in relation to automated decision-making, i.e., decisions taken without human oversight or intervention. An example would be where a bank creates a ‘profile’ through automated processing of personal information and a loan application is automatically declined based on adverse credit history. The creation of such a profile is potentially damaging, and it is for this reason that a decision that affects the data subject substantially and is based solely on automated processing of personal information is prohibited. Article 22 of the GDPR prohibits automated decision-making only where the decision is based solely on automated processing and produces legal effects concerning the data subject or similarly significantly affects them; it is allowed, however, where the process is carried out with the data subject’s explicit consent or the controller has sufficient safeguards in place. It is important to note that neither POPIA nor the GDPR prevents the use of analytics in decision-making or research as such; rather, they provide for certain duties and restrictions, which could relate to the de‑identification of personal information, among others.
  4. “Recommendations of the Council on Artificial Intelligence,” OECD Legal Instruments, 21 May 2019, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
  5. “Guidance on AI and data protection,” Information Commissioner’s Office, https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/guidance-on-artificial-intelligence-and-data-protection/
  6. “Fintech Workshops,” Intergovernmental Fintech Working Group, September 2019, http://www.treasury.gov.za/comm_media/press/2020/IFWG_2019WorkshopsReport_v1.0.pdf