There is no more fitting metaphor than Pandora's Jar1 for describing the entry of artificial intelligence (AI) into our society. Pandora's "box" is a historical mistranslation;2 the original Greek described Pandora's jar. The application of AI and machine learning in financial crime compliance, according to the Wolfsberg Principles,3 demands legitimacy of purpose and integrity of data outputs. It is, therefore, apt to refer accurately to Pandora's container of doom.
Furthermore, the moral of the story is not that Pandora's Jar contained the ingredients of doom and therefore should never have been opened, but rather the purpose for which it was opened. Pandora opened the jar out of curiosity rather than for the good of humanity, ending the golden age of humanity.4 In essence, Pandora's risk was not taken for a legitimate purpose.
The Wolfsberg Principles define the reason for opening the modern-day Pandora’s Jar that is AI as:
“FIs’ [financial institutions’] programmes to combat financial crimes are anchored in regulatory requirements, and a commitment to help safeguard the integrity of the financial system, while reaching fair and effective outcomes.”
Wolfsberg’s Five Principles: A Brief Overview
Principle One: Legitimate Purpose
The first Wolfsberg Principle refers to a legitimate purpose and is presumably a direct reference to Article 5(b) of the GDPR, which reads: "Personal data shall be: collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes…"5
Wolfsberg states, “FIs should implement a programme that validates the use and configuration of AI and machine learning regularly, which will help ensure that the use of data is proportionate to the legitimate, and intended, financial crimes compliance purpose.”6
The principle then advises that FIs should follow a risk-based approach to the development and use of AI and machine learning solutions for financial crimes compliance.
Principle Two: Proportionate Use
FIs should balance "the benefits of use with appropriate management of the risks that may arise" from the use of AI and machine learning (ML) solutions in financial crime compliance. Wolfsberg's proportionate use principle further states that FIs must weigh the "severity of potential financial crimes risk" against "any AI/ML solutions' margin for error. FIs should implement a programme that validates the use and configuration of AI/ML regularly."7
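The weighing described here can be made concrete. The minimal sketch below is a hypothetical illustration only: the severity scale, tolerance factor and function name are invented for this example and are not prescribed by Wolfsberg. It simply shows how a higher-severity risk could translate into a tighter tolerance for a model's margin of error.

```python
# Hypothetical proportionate-use check: weigh the severity of the financial
# crime risk a model addresses against that model's margin for error.
# The scale and thresholds below are invented for illustration.

def proportionate(risk_severity, model_error_rate, tolerance_factor=0.05):
    """Accept an AI/ML solution only if its error rate is small relative to
    the severity of the risk it manages (risk_severity: 1 = low, 5 = critical)."""
    allowed_error = tolerance_factor * (6 - risk_severity)
    return model_error_rate <= allowed_error

print(proportionate(risk_severity=5, model_error_rate=0.02))  # True: tight tolerance met
print(proportionate(risk_severity=5, model_error_rate=0.10))  # False: error rate too high
```

For a critical risk (severity 5) the sketch tolerates at most a 5 percent error rate, while a low-severity risk (severity 1) tolerates up to 25 percent; the point is the shape of the trade-off, not the specific numbers.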
Principle Three: Design and Technical Expertise
The third principle recommends that FIs obtain and possess the necessary expertise to control and thoroughly understand the "implications, limitations, and consequences" of using AI and machine learning solutions for financial crimes compliance. Furthermore, the design of AI and machine learning systems should be developed according to a "clear definition of the intended outcomes and ensure that results can be adequately explained or proven given the data inputs." In short, Wolfsberg recommends a thorough understanding of the AI and machine learning technology in use, accompanied by a well-defined reason for its use.
FIs must also be able to prove and explain that the data generated was in line with the intended outcomes.
Under this principle, the focus is on expertise, control, understanding and constant monitoring of the technology to ensure that it is used for its intended purpose and outcomes.
Principle Four: Accountability and Oversight
Regardless of whether the AI and machine learning systems were developed in-house or sourced externally, Wolfsberg emphasizes FIs' accountability for the use of the technology. Emphasis is also placed on staff training to ensure the appropriate use of AI and machine learning and to enable oversight of their design by technical teams with specific responsibility for the ethical use of data. Oversight through existing risk or data management frameworks is recommended to ensure that data is used ethically in AI and machine learning. Processes that challenge technical teams and "probe the use of data within their organisations"8 should also be developed.
Principle Five: Openness and Transparency
Openness and transparency are the focus of Wolfsberg's fifth principle: FIs should be open and transparent about "their use of AI/ML, consistent with legal and regulatory requirements"9 without facilitating the "evasion of the industry's financial crime capabilities, or breach reporting confidentiality requirements and/or other data protection obligations inadvertently." Engagement with all relevant role-players is recommended to achieve this principle.
Risk-Based Assessment
Wolfsberg’s recommendation is a risk-based assessment for the use of AI and machine learning:
“The Principles should be operationalised by each FI according to a risk-based approach dependent on the prevailing and evolving regulatory landscape, as well as on its use of AI/ML against financial crime, and governed accordingly.”
Risk assessment is, therefore, the primary concern.
The Regtech Jar
The Financial Action Task Force (FATF) refers to regulatory technology (regtech) as the umbrella under which AI and machine learning technology fall. Regtech is a subset of financial technology and focuses on technologies that facilitate the delivery of regulatory requirements with better efficiency than existing procedures and legacy technology.12 Global bodies are clearly pushing AI and machine learning as the new go-to technology, and it is here to stay. The application of AI and machine learning in the anti-money laundering (AML) space is all about risk versus reward. Therefore, the most prudent question that first needs to be answered is this: What are the risks?
Wolfsberg's third principle, "Design and Technical Expertise," specifically addresses the capacity of FIs to thoroughly understand the implications, limitations and consequences of using AI and machine learning solutions. In other words, their risk.
It is arguable that understanding the implications, limitations and consequences of using AI and machine learning solutions in AML is at the core of its legal application. There is no other way to effectively gauge and implement the risk/reward equation.
The Risks of AI and Machine Learning
Effectively gauging the risks of AI and machine learning is a team effort. A deep and thorough understanding of the technologies, information technology law and regulatory requirements is needed for effective assessment. It is this combination of skill sets that poses the greatest challenge to the industry in getting it right. It is common sense that a lawyer cannot give scientific advice and a scientist cannot give legal advice. The legal threats posed by AI and machine learning can only be effectively gauged if the team has a thorough understanding of the technology and the applicable law. Wolfsberg's third principle reads:
“Teams involved in the creation, monitoring, and control of AI/ML should be composed of staff with the appropriate skills and diverse experiences needed to identify bias in the results.”13
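One way a technical team might begin to "identify bias in the results" is to compare a model's error rates across customer segments. The sketch below is a hypothetical illustration, not a Wolfsberg-prescribed method; the segment labels and alert data are invented for this example.

```python
# Hypothetical sketch: compare false-positive rates across customer segments
# to surface potential bias in an AI/ML alerting model's outputs.
from collections import defaultdict

# (segment, model_flagged, truly_suspicious) -- invented illustration data.
alerts = [
    ("segment_a", True, False), ("segment_a", False, False),
    ("segment_a", True, True),  ("segment_a", False, False),
    ("segment_b", True, False), ("segment_b", True, False),
    ("segment_b", True, True),  ("segment_b", False, False),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for segment, flagged, suspicious in alerts:
    if not suspicious:                      # only genuine negatives can be false positives
        counts[segment]["negatives"] += 1
        if flagged:
            counts[segment]["fp"] += 1

for segment, c in sorted(counts.items()):
    rate = c["fp"] / c["negatives"]
    print(f"{segment}: false-positive rate {rate:.0%}")
```

In this invented data, one segment's false-positive rate is roughly double the other's; a disparity of that kind is exactly the sort of signal a diverse review team would investigate further.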
Understanding AI and Machine Learning
FIs need a basic understanding of the technology. Science fiction and hype need to be safely tucked away to ensure the legal and realistic application of AI and machine learning technologies.
One of the core realities of machine learning is that the software operates on its own cognisance, without human input or, potentially, even human oversight.
Autonomy can be said to be the ultimate goal of AI, although that is not what defines it. AI can produce output from direct human input (of information) while lacking autonomy. Similarly, AI and machine learning are not the same thing.
"…artificial intelligence and machine learning are not the same, but they are closely related. Machine learning is the method of training a computer to learn from its inputs but without explicit programming for every circumstance. Machine learning helps a computer to achieve artificial intelligence."14
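The distinction can be illustrated with a minimal, hypothetical sketch: a rule written explicitly by a human versus a decision boundary inferred from labelled examples. The threshold, amounts and "learning" step below are invented for illustration and are far simpler than any real AML system.

```python
# Minimal illustration (invented data): explicit rule vs. learned model.
# A hand-written rule encodes human knowledge directly; the "learned" rule
# infers its own cut-off from labelled examples, without being told the rule.

def explicit_rule(amount):
    # A human analyst hard-codes the threshold.
    return amount > 10_000  # flag as suspicious

# Toy "learning" step: labelled transactions (amount, was_suspicious).
labelled = [(500, False), (9_000, False), (12_000, True), (50_000, True)]

# Learn a cut-off: midpoint between the largest "clean" and smallest "flagged" amount.
clean_max = max(a for a, flagged in labelled if not flagged)
flagged_min = min(a for a, flagged in labelled if flagged)
learned_threshold = (clean_max + flagged_min) / 2   # 10,500 for this data

def learned_rule(amount):
    return amount > learned_threshold

print(explicit_rule(11_000))  # True: boundary written by a human
print(learned_rule(11_000))   # True: boundary inferred from data
```

The two rules agree here, but only the second was derived from data. Scaled up to thousands of features, that inference step is what makes the model's behaviour harder for a human to predict, which is precisely the risk the article goes on to describe.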
Utilizing AI means depending on, and trusting, the outputs of a computer program that is not under the direct control of a human. That is what you call a risk. FIs therefore need to understand this concept well.
Conclusion
In short, Wolfsberg’s first principle of “Legitimate Purpose” should be the reason for opening the AI and machine learning jar in conjunction with a deep and thorough understanding of the technology and its possible outcomes.
FIs should take note of the Wolfsberg Principles and understand that their criminal and civil liability depends on the legality of the output of AI software: a Pandora's jar.
Gideon Bouwer, information technology law attorney, COO of African Cyberlaw and Forensic Solutions, South Africa, gideon@aclf.co.za
- N.S. Gill, "Understanding the Significance of Pandora's Box," ThoughtCo., 27 August 2020, www.thoughtco.com/what-was-pandoras-box-118577
- In a later story, the jar contained blessings that would have preserved the golden age of humanity rather than destroying it.
- "Pandora," Encyclopedia Britannica, 5 December 2022, https://www.britannica.com/topic/Pandora-Greek-mythology
- “Wolfsberg Principles for Using Artificial Intelligence and Machine Learning in Financial Crime Compliance,” Wolfsberg, https://wolfsberg-group.org/news/34
- "Pandora," Encyclopedia Britannica, 5 December 2022, https://www.britannica.com/topic/Pandora-Greek-mythology
- “Wolfsberg Principles for Using Artificial Intelligence and Machine Learning in Financial Crime Compliance,” Wolfsberg, https://wolfsberg-group.org/news/34
- Ibid.
- Ibid.
- Ibid.
- Ibid.
- Ibid.
- “Opportunities and Challenges of New Technologies for AML/CFT,” Financial Action Task Force, 2021, https://www.fatf-gafi.org/content/dam/fatf-gafi/guidance/Opportunities-Challenges-of-New-Technologies-for-AML-CFT.pdf.coredownload.pdf
- “Wolfsberg Principles for Using Artificial Intelligence and Machine Learning in Financial Crime Compliance,” Wolfsberg, https://wolfsberg-group.org/news/34
- B.J. Copeland, "Artificial Intelligence," Encyclopedia Britannica, 24 August 2022, https://www.britannica.com/technology/artificial-intelligence