Why ChatGPT Is Unlikely to Replace the Human Factor in AML

Artificial intelligence (AI) describes the development of computer systems that can perform tasks typically requiring human intelligence, such as learning, reasoning and problem-solving. AI has already been used across a multitude of industries1 to improve daily life. It has been implemented in the health care industry to deliver more accurate diagnoses. In the food industry, AI develops methods to grow healthier crops. In the banking industry, AI quickly processes loan applications.

As AI continues to develop into a powerful and accessible tool, society benefits from its wide-ranging applications. Since criminals―particularly financial criminals such as fraudsters, money launderers and terrorist financiers―tend to be early adopters of new technology, the work we all do in anti-money laundering/counter-terrorist financing (AML/CTF) has just become more difficult. Examples of technology being adopted by criminals range from using the dark web to buy and sell illegal goods,2 to email phishing scams and recent internet-based pig butchering scams.3

ChatGPT and AI Overview

One such example of an accessible AI application is ChatGPT, a program created by OpenAI. ChatGPT is one in a series of natural language models4 deployed by OpenAI. For a layperson, language models might be comparable to a supercharged autocorrect engine.5 That is to say, ChatGPT does not really know anything, but it is able to relay a series of words that are generally associated with each other.
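
To make the autocorrect analogy concrete, consider the minimal toy sketch below. It is nothing like ChatGPT’s actual architecture, which relies on neural networks trained on enormous text corpora, but it illustrates the core idea of predicting the next word purely from observed word associations:

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus; real language models train on vastly more text.
corpus = (
    "financial institutions must file a currency transaction report "
    "financial institutions must register with fincen "
    "financial crimes require a report"
).split()

# Count which word follows which (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("financial"))     # institutions
print(predict_next("institutions"))  # must
```

The sketch has no concept of truth: It only repeats word associations from its training text, which is why a far more sophisticated model built on the same principle can sound fluent and still be wrong.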

Most significantly for all of us in the compliance sector, ChatGPT does not check facts. To the contrary, at the bottom of the ChatGPT prompt screen, there is this disclaimer: “Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts.” Information provided by ChatGPT must be independently verified to be trusted. Overreliance on the accuracy of ChatGPT has already caused problems for legal professionals. New York attorneys were fined for submitting a legal brief that cited fictitious cases6 generated by ChatGPT, which has been known to invent cases or facts about cases.7

The authors asked ChatGPT about itself and found that someone or something, most likely a human, wrote what amounts to a disclaimer so broad as to reduce or eliminate the usefulness of the AI. In part, it reads, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”8 This disclaimer should be posted whenever text provided by ChatGPT is used, quoted or referenced in any form.

In short, ChatGPT uses machine learning algorithms to process and analyze data and generate responses to users’ inquiries that are accessible, friendly and often couched in the first person, i.e., “Yes, I can”9 or “However, I should note.”10 This means that when users type a question into ChatGPT, it produces a conversational answer.
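
For readers who want to see that interaction programmatically, a minimal sketch using OpenAI’s Python client might look like this (the v1.x interface; model names and client interfaces change over time, so treat the details as assumptions rather than a recipe):

```python
# Minimal sketch: ask a chat model a compliance question and print the
# conversational reply. Assumes the openai package (v1.x) is installed
# and the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: any available chat model would do
    messages=[
        {"role": "user",
         "content": "What is the definition of a currency transaction report?"},
    ],
)

# The reply reads like a conversation, but nothing here is fact-checked;
# the answer must still be verified against authoritative sources.
print(response.choices[0].message.content)
```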

For example, when prompted to “Describe how to use ChatGPT to prevent money laundering schemes in the United States,” ChatGPT provided a list of ways it could be used to support a comprehensive AML program or regulatory compliance: “ChatGPT can aid in preventing money laundering schemes in the United States by providing real-time analysis of financial transactions, identifying suspicious patterns or activities, and offering guidance on relevant regulations and compliance measures. By integrating GPT-3.5 into financial monitoring systems, it can help identify red flags, potential money laundering activities, and suggest appropriate reporting steps to relevant authorities. Additionally, it can provide training and educational resources to financial professionals to enhance their understanding of evolving money laundering tactics and regulatory updates, thus contributing to a proactive approach in combating financial crimes.”11
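
Taking that quoted suggestion at face value, an “integration” might look something like the sketch below, which asks a model to triage a transaction narrative for red flags. The prompt, model choice and output handling are all our own assumptions, and, as the next paragraph makes clear, any output would be a lead for a human analyst, not a determination:

```python
# Hedged sketch of feeding a transaction narrative to a chat model for
# red-flag triage. Assumes the openai package (v1.x) and an API key.
from openai import OpenAI

client = OpenAI()

narrative = (
    "Customer deposited $9,900 in cash at three different branches on the "
    "same day, then wired the combined funds abroad the next morning."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You assist an AML analyst. List any potential red flags "
                    "in the transaction narrative, citing the pattern each "
                    "one resembles."},
        {"role": "user", "content": narrative},
    ],
)

# The model's output is a prompt for human investigation, never a
# suspicious activity determination in itself.
print(response.choices[0].message.content)
```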

Although it can be a helpful tool, ChatGPT will tell you that it is not a substitute for a comprehensive AML program, nor can it replace the human element.

ChatGPT’s Limitations

ChatGPT has flaws: For one, ChatGPT is only as good as the information it has been fed. It is the old story of garbage in, garbage out, since ChatGPT is limited by the veracity of its source data. In addition, it reflects back to us the views it picks up from its source data, not the internet at large. Unlike search engines such as Google that have access to the entire internet, ChatGPT in its present form is limited to the source data available at the time it was created (e.g., data available up until September 2021). Further, AI is a reflection of societal knowledge and, therefore, is susceptible to the same biases people are susceptible to, such as sexism,12 racial bias and political polarization. When you Google something, you are given multiple sources, and it is up to you to trace what you want to use, where the information comes from and who wrote it. When you ask ChatGPT something, you do not necessarily know who wrote the underlying material, let alone whether the information provided is accurate. This leads back to a flaw discussed above: ChatGPT has been known to make stuff up.

Effect on the AML Field and on Criminals

ChatGPT can provide accurate answers to a variety of AML and Bank Secrecy Act (BSA)-related questions. Below are questions we asked ChatGPT; the responses vary in accuracy.

Example: In response to being asked, “What is the definition of a currency transaction report?” the following appeared: “A Currency Transaction Report (CTR) is a financial document required by the Financial Crimes Enforcement Network (FinCEN) in the United States, typically completed by financial institutions, which details any transactions involving cash amounts over a certain threshold (currently $10,000) within a single business day. The report includes information about the transaction, the parties involved, and the purpose of the transaction, aiding in the detection and prevention of money laundering and other illicit activities.”13

Analysis: The response is pretty accurate, concise and helpful, and most importantly, it makes clear that the threshold for U.S. CTRs is over $10,000, not $10,000 or below, as many folks believe.
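
To see why the “over $10,000” distinction matters in practice, here is a hedged sketch of a daily cash-aggregation check. The record format is our own invention, and a real CTR workflow involves much more (conductors versus beneficiaries, multiple branches, exemptions), but the strictly-greater-than comparison is the point:

```python
from collections import defaultdict

CTR_THRESHOLD = 10_000  # CTRs are triggered by cash totals OVER $10,000
                        # in one business day, not $10,000 exactly.

# Hypothetical records: (customer_id, business_date, cash_amount_usd)
transactions = [
    ("C001", "2023-09-01", 6_000),
    ("C001", "2023-09-01", 4_000),   # totals exactly 10,000: no CTR
    ("C002", "2023-09-01", 10_500),  # over 10,000: CTR required
]

# Aggregate cash activity per customer per business day.
totals = defaultdict(float)
for customer, date, amount in transactions:
    totals[(customer, date)] += amount

for (customer, date), total in totals.items():
    if total > CTR_THRESHOLD:  # strictly greater than, per the analysis above
        print(f"CTR required: customer {customer}, {date}, total ${total:,.2f}")
```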

Example: Contrast the previous example to the response we received to the question: “What does section 314(b) of the USA PATRIOT Act say?”

“Section 314(b) of the USA PATRIOT Act encourages collaboration and information sharing among financial institutions, specifically with regard to detecting and preventing money laundering and terrorist financing. The section enables the sharing of information regarding individuals or organizations that may be involved in such illicit activities.

Here is a summary of the key points from Section 314(b) of the USA PATRIOT Act:

  1. Information Sharing Authorization: Financial institutions are authorized to share information with one another or with associations of financial institutions regarding individuals, entities, organizations, or countries suspected of involvement in money laundering or terrorist activities.
  2. Voluntary Sharing: The information sharing under Section 314(b) is voluntary, meaning that financial institutions are not mandated to share information, but they are encouraged to do so to enhance the collective effort to combat financial crimes.
  3. Protection from Liability: Financial institutions that voluntarily share information in accordance with the provisions of Section 314(b) are protected from liability for such sharing and are deemed to be in compliance with privacy and confidentiality obligations.
  4. Purpose of Sharing: The sharing of information is intended to help financial institutions identify and report potential money laundering or terrorist financing activities more effectively. By sharing insights and suspicions, institutions can collectively strengthen their ability to detect and prevent financial crimes.
  5. Privacy and Confidentiality Considerations: While promoting information sharing, financial institutions must still maintain the privacy and confidentiality of shared information in accordance with applicable laws and regulations.

Section 314(b) was enacted to facilitate cooperation and collaboration among financial institutions, thereby improving the collective ability to identify and respond to suspicious activities related to money laundering and terrorist financing. It provides a framework for sharing relevant information while offering liability protections for those participating in good faith.”14

Analysis: The response is seemingly accurate, not particularly concise, and missing at least two key points: First, in order for a financial institution (FI) to participate, the FI must first register with FinCEN15 and renew that registration annually. There is no reference to this requirement in the ChatGPT formulation. Second, the FI may only share information with another FI registered with FinCEN.16 In either case, failing to register, or sharing information with an unregistered FI, will cause the FI providing information to lose the safe harbor provided for in 314(b), which is one of the essential elements of that section.
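
As an illustration of the registration point, a 314(b) sharing workflow would need a gate like the sketch below before any information leaves the institution. FinCEN publishes no public lookup API, so the registration set here is a stand-in for an FI’s own compliance records:

```python
# Hypothetical record of current 314(b) registrations. Registration must
# be renewed annually, so a real check would also verify the period.
REGISTERED_314B = {"First Example Bank", "Second Example Credit Union"}

def may_share_under_314b(sending_fi: str, receiving_fi: str) -> bool:
    """Both parties need current 314(b) registrations; otherwise the
    sharing FI loses the section's safe-harbor protection."""
    return sending_fi in REGISTERED_314B and receiving_fi in REGISTERED_314B

print(may_share_under_314b("First Example Bank", "Second Example Credit Union"))  # True
print(may_share_under_314b("First Example Bank", "Unregistered Bank"))            # False
```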

One day, AI like ChatGPT may make it easier for criminals to conduct illegal activities. Criminals could create their own AI to run thousands of scams simultaneously, or even to bypass the security and safety mechanisms that organizations like OpenAI have put in place to prevent misuse of their proprietary systems and existing AI programs.

Takeaways

  1. You can use ChatGPT as a starting point but cannot rely on its accuracy. If you do not believe us, just ask ChatGPT! When asked, “Can I rely on the answers ChatGPT provides?” ChatGPT responded: “While ChatGPT can offer helpful information and suggestions, it’s crucial to exercise caution and verify the accuracy and reliability of the answers provided, especially for critical or important decisions. Always cross-reference the information with credible sources and consult professionals when needed to ensure accuracy and appropriateness for your specific situation.”17
  2. Google, Bing, Yahoo! and other search engines are not only useful but required to verify what ChatGPT tells you.
  3. The absence of source documentation is a serious impediment. Although ChatGPT may provide sources upon request, any sources it does provide must still be verified because ChatGPT may produce nonsensical answers.18
  4. As a training tool, the current version is sadly lacking: The Federal Financial Institutions Examination Council’s “BSA/AML Examination Manual”19 remains not only a useful tool but an authoritative source―even if it is wrong, if it is in the manual, your regulatory interlocutors will be hard-pressed to challenge you.
  5. Criminals will like ChatGPT’s simplicity and ease of use. They are not known for checking source documents or looking for citations, so there is no impediment there. But that also means they may get it wrong, which is a great help to those of us in the AML/CTF field.
  6. Even if all of the above tells you nothing you do not already know, we hope you will agree on one thing: Human intervention will be a necessary part of any effective compliance framework for some time to come. Elevator operators, phone operators, parking lot attendants and other jobs requiring mechanistic responses could be automated away. Evaluating risk factors and red flags; detecting, investigating and analyzing suspicious activity; and advising the business line on the risks of doing what they always want to do (onboard, maintain, protect and defend the client) are different: These tasks will remain a fundamental part of compliance and will require the complex logic, reasoning and intuitive thinking that only humans are capable of, at least for now.
  7. Finally, the AI landscape is changing so quickly that everything we have written about it will be outdated eventually, and some of it by the time this article is published.

Amanda Bini, legal research consultant, Law Office of Ross Delston

Ross Delston, CAMS, independent American attorney and expert witness, Law Office of Ross Delston

The authors wish to acknowledge the assistance provided by Dr. Neha Singh, AI/machine learning, natural language processing and data science expert.

  1. Akash Takyar, “AI Use Cases & Applications Across Major Industries,” LeewayHertz, https://www.leewayhertz.com/ai-use-cases-and-applications/
  2. “Dark Web Crimes,” FindLaw, https://www.findlaw.com/criminal/criminal-charges/dark-web-crimes.html
  3. “‘Pig Butchering’ Scams: What They Are and How to Avoid Them,” FINRA, December 13, 2022, https://www.finra.org/investors/insights/pig-butchering-scams
  4. Miles Brundage, Katie Mayer, Tyna Eloundou, et al., “Lessons learned on language model safety and misuse,” OpenAI, March 3, 2022, https://openai.com/research/language-model-safety-and-misuse
  5. David Nield, “How ChatGPT and Other LLMs Work―and Where They Could Go Next,” Wired, April 30, 2023, https://www.wired.com/story/how-chatgpt-works-large-language-model/
  6. Sara Merken, “New York lawyers sanctioned for using fake ChatGPT cases in legal brief,” Reuters, June 26, 2023, https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/
  7. James Vincent, “OpenAI sued for defamation after ChatGPT fabricates legal accusations against radio host,” The Verge, June 9, 2023, https://www.theverge.com/2023/6/9/23755057/openai-chatgpt-false-information-defamation-lawsuit
  8. “Introducing ChatGPT,” OpenAI, November 30, 2022, https://openai.com/blog/chatgpt
  9. This information was obtained directly from ChatGPT. It should be noted that ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Please view the following source for more information: “Introducing ChatGPT,” OpenAI, November 30, 2022, https://openai.com/blog/chatgpt
  10. Ibid.
  11. Ibid.
  12. Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, October 10, 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
  13. This information was obtained directly from ChatGPT. It should be noted that ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Please view the following source for more information: “Introducing ChatGPT,” OpenAI, November 30, 2022, https://openai.com/blog/chatgpt
  14. Ibid.
  15. “Section 314(b) Fact Sheet,” Financial Crimes Enforcement Network, December 2020, https://www.fincen.gov/sites/default/files/shared/314bfactsheet.pdf
  16. Ibid.
  17. This information was obtained directly from ChatGPT. It should be noted that ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Please view the following source for more information: “Introducing ChatGPT,” OpenAI, November 30, 2022, https://openai.com/blog/chatgpt
  18. Ibid.
  19. “BSA/AML Examination Manual,” Federal Financial Institutions Examination Council, https://bsaaml.ffiec.gov/manual
