Why Does the Computer Say No? Algorithm Transparency for AFC Systems

When the phrase ‘computer says no’ was first used in ‘Little Britain’, a British comedy show from over a decade ago, it resonated with many because it reflected the widespread use of algorithms in day-to-day decision-making. Carol, a character played by David Walliams, was a bank worker (and later a holiday rep and hospital receptionist) who answered every customer request by typing it into her computer and replying ‘computer says no’, even to the most reasonable ones.

So, what is algorithm transparency? In simple terms, an algorithm is a set of instructions to perform a task. Automated systems have been built around such algorithms, which have become increasingly complex over the years, even though in many organisations these systems remain rule-based. With the expansion of artificial intelligence (AI) and machine learning (ML) solutions, these systems have become more complex still, and correlating inputs with outputs, and explaining results that can seem almost magical in some cases, has become a more difficult task. Yet these systems are viewed as the future of financial crime detection for good reasons: they measure many parameters at the same time and enable detection of complex patterns that were previously unknown. Organisations find the effectiveness and efficiency of AI and ML appealing, and these techniques make the system harder for bad actors to game. Transparency is about understanding how these algorithms work, establishing that an output is an intended consequence of a given input, and confirming that the output is fair and explainable.

Much has been written about algorithm transparency, or the lack of it, in the last few years. Far less exists on the pitfalls and the practical steps organisations can take, especially when deploying the power of AI and ML in the fight against financial crime. A lack of transparency may mean that an organisation cannot tell whether these systems are producing incorrect results, or what can be done about it when they are.

In November 2017, The Guardian covered the widely reported lawsuit by American teachers over a computer system that assessed their performance.1 The system rated teachers in Houston by comparing their students’ test scores against state averages. Those who fared poorly faced the sack. What infuriated the teachers was that they had no way of checking whether the system was fair or whether it contained errors. Because the algorithm was flawed in multiple ways, the teachers won the lawsuit and the school authority stopped using the software. This landmark case illustrates the need for algorithm transparency in systems used globally across different industries.

How does this apply to financial institutions (FIs) and anti-financial crime (AFC) systems? With increasing regulation, bad actors must constantly innovate to evade detection. Regulation, coupled with tighter market conditions, has caused FIs to examine the costs of AFC compliance closely. Automation can deliver both reduced cost and increased effectiveness, and FIs are adopting it at a fast pace to fight financial crime. AI and ML are no longer theoretical concepts; they are practical realities in many FIs. Rule-based mathematical models were already hard to understand, which made results hard to confirm, and the increasing adoption of AI and ML may be making this task harder still.

FIs will need to explain to regulators (and to customers) the rationale for their actions and decisions, even when those actions and decisions were derived from their systems. However, FIs are increasingly using specialist vendors, outsourcing the building of financial crime prevention systems to experts and focusing on their own core competencies. This further dilutes FIs’ understanding of the algorithms. Vendors, with a legitimate interest in protecting their intellectual property, sometimes ‘black box’ their algorithms, which introduces further opaqueness into the overall environment. Is the answer for vendors to publish their code? Would it even help for a complex transaction monitoring system to reveal every line of code in its entirety? Certainly not. There is a way to achieve transparency without creating overwhelming amounts of data while still protecting intellectual property.

Achieving Transparency

There are a few steps involved in achieving detailed understanding and transparency with respect to algorithms. These steps become doubly important as the scale of adoption of AI and ML increases further.

1: Understand and Confirm Technical Correctness

With any algorithm, there is no shortcut to confirming technical correctness; be it a vendor’s software or an internal build, the principle of accountability remains the same: it sits with the regulated FI. Therefore, going through a rigorous process of confirmation is key. Organisations have used focus groups of specialists from different fields, for example a combination of an AFC specialist, a data modeller, a systems architect and a change delivery specialist. Having multiple focus groups working in parallel is beneficial, as it builds a deeper understanding across the organisation’s landscape.

Anecdotes of a product’s success at other big institutions are not evidence of technical correctness and are certainly not a proxy for it. While it may be reassuring that the product is widely used and understood, technical nuances specific to the organisation must be identified and worked through. For example, a customer screening system that worked very well for one bank may produce spurious alerts in another because the latter has a larger number of customers with names in non-Latin scripts (eg Arabic, Chinese or Cyrillic).
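
As a purely illustrative sketch of this point, the snippet below uses a naive character-level similarity score and a hypothetical alert threshold (stand-ins, not how production screening engines actually work) to show how matching behaviour tuned on Latin-script names can behave very differently for transliterated or non-Latin names.

```python
# Illustrative sketch only: a crude character-level similarity score and a
# hypothetical threshold, to show why screening tuned on Latin-script names
# can misbehave on transliterations. Real screening engines are far more
# sophisticated than this.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude character-level similarity between two names (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.80  # hypothetical alert threshold tuned on Latin-script data

pairs = [
    ("Jonathan Smith", "Jonathon Smyth"),     # Latin-script spelling variant
    ("Mohammed Hussein", "Muhammad Husayn"),  # two valid transliterations
    ("Mohammed Hussein", "محمد حسين"),         # Latin rendering vs Arabic script
]

for a, b in pairs:
    score = similarity(a, b)
    verdict = "ALERT" if score >= THRESHOLD else "no alert"
    print(f"{a!r} vs {b!r}: score={score:.2f} -> {verdict}")
```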

Even where the process of technical confirmation is carried out, FIs often fall at the documentation hurdle. Workshops are held, project groups and task forces are put together and in-depth discussions take place. Documenting how technical correctness has been established is crucial and should follow from these discussions and workshops. Aside from helping disseminate knowledge across the organisation, documentation also helps with understanding the strengths and limitations of the algorithms in question. Finally, documentation is useful for internal audits and for satisfying regulators.

2: Testing and Integration

Combining elements of white-box testing (where the internal system design is known to the tester) with elements of black-box testing (where the tester disregards the internal design and focuses on outputs) provides transparency as well as confirmation that the systems work as designed. With large systems, complexities often emerge when differing pools of data from different source systems come together. Completing integration testing in steps applies as much to new AI solutions as it does to legacy rule-based systems.
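
To make the distinction concrete, here is a minimal sketch, built around a deliberately simple, hypothetical transaction-monitoring rule and invented names, of how white-box tests probe known internals (such as a threshold boundary) while black-box tests only assert on expected outputs.

```python
# Minimal sketch (hypothetical rule and names) contrasting white-box and
# black-box style checks for a simple transaction-monitoring rule.
import unittest

ALERT_THRESHOLD = 10_000  # internal parameter; visible to white-box tests

def should_alert(transactions: list) -> bool:
    """Alert when the daily total for a customer exceeds the threshold."""
    return sum(transactions) > ALERT_THRESHOLD

class WhiteBoxTests(unittest.TestCase):
    # White-box: the tester knows the internal design (a daily-sum rule with
    # a fixed threshold) and probes the boundary directly.
    def test_boundary_is_strictly_greater_than(self):
        self.assertFalse(should_alert([ALERT_THRESHOLD]))
        self.assertTrue(should_alert([ALERT_THRESHOLD + 0.01]))

class BlackBoxTests(unittest.TestCase):
    # Black-box: the tester only knows the expected behaviour from the
    # requirements ("large daily totals must alert, small ones must not").
    def test_small_activity_does_not_alert(self):
        self.assertFalse(should_alert([120.0, 75.5, 300.0]))

    def test_structured_large_activity_alerts(self):
        self.assertTrue(should_alert([4_000.0, 4_000.0, 4_000.0]))

if __name__ == "__main__":
    unittest.main()
```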

Documenting test results in a clear and concise manner is key. One way organisations come to terms with the complexity is by creating several levels of documentation to suit the needs of different audiences, such as a board-level pack, a management-awareness pack, an education pack, technical testing output documentation and an operational usage guide.

3: Statistical Modelling

To be used safely, algorithms must be statistically sound. They need to be tested for bias and for under- or over-representation of certain groups (these could be customer groups, specific transaction types or behaviour patterns). Statistical modelling is the process of applying statistical analysis to a data set; a statistical model is a mathematical model of observed data, and building one often involves using a sample to make inferences about the whole. Data science techniques are used to perform this statistical validation of algorithms. This step is crucial to understanding algorithm behaviour and avoiding unintended consequences.
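
As a minimal sketch of one such check, assuming hypothetical customer segments and entirely invented data, the snippet below compares each group’s share of the full customer base with its share of the cases a model flags, and raises a flag where the deviation exceeds a chosen tolerance.

```python
# Minimal sketch, assuming hypothetical group labels: compare how often each
# customer group appears among a model's flagged cases against its share of
# the full customer base, to surface under- or over-representation.
from collections import Counter

def representation_report(population_groups, sample_groups, tolerance=0.05):
    """Flag groups whose share in the sample deviates from the population
    share by more than `tolerance` (absolute difference in proportions)."""
    pop = Counter(population_groups)
    smp = Counter(sample_groups)
    pop_n, smp_n = sum(pop.values()), sum(smp.values())
    report = {}
    for group in pop:
        pop_share = pop[group] / pop_n
        smp_share = smp.get(group, 0) / smp_n
        report[group] = {
            "population_share": round(pop_share, 3),
            "sample_share": round(smp_share, 3),
            "flag": abs(smp_share - pop_share) > tolerance,
        }
    return report

# Hypothetical data: customer segments in the full book vs in alerted cases.
population = ["retail"] * 800 + ["sme"] * 150 + ["corporate"] * 50
alerted = ["retail"] * 60 + ["sme"] * 35 + ["corporate"] * 5

for group, stats in representation_report(population, alerted).items():
    print(group, stats)
```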

In an article titled ‘A call for transparent AI’, ITU News, the news service of the International Telecommunication Union (the UN specialised agency for information and communication technology), stated, ‘You can build a simpler and easier to explain AI model that approximates a more complex one. In that way, you can compare the outcome for particular cases and make sure that the more complex model makes sense.’2

This is one way to achieve a robust understanding. For example, an organisation with 20 million customers has a data set too large to manipulate easily. Choosing a representative sample of x per cent, where x is small but statistically valid, allows FIs to understand the impact of decisions, manipulate the data and project forecasts.
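
A minimal sketch of the surrogate-model idea quoted above is shown below, using entirely synthetic data and off-the-shelf scikit-learn components: a shallow decision tree is fitted to imitate a more complex model’s decisions on a sample, its fidelity (agreement with the complex model) is measured, and its rules are printed so they can be inspected.

```python
# Minimal sketch of the surrogate-model suggestion: fit a simple, explainable
# model on a sample and check how closely it reproduces the decisions of a
# more complex model. All data and feature names here are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a sampled customer/transaction data set.
X, y = make_classification(n_samples=5_000, n_features=6, random_state=0)

# The "complex" production-style model.
complex_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Fit a shallow tree to imitate the complex model's outputs, not the labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, complex_model.predict(X))

# Fidelity: how often the surrogate agrees with the complex model.
fidelity = accuracy_score(complex_model.predict(X), surrogate.predict(X))
print(f"Surrogate agrees with the complex model on {fidelity:.1%} of cases")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```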

4: Algorithmic Audits

In her book ‘Weapons of Math Destruction’, Cathy O’Neil uses that phrase to describe algorithms with little to no transparency. O’Neil advocates measuring their impact and carrying out algorithmic audits. These audits may reveal that models are too simplistic or primitive, which may seem counter-intuitive given the complexity discussed above. At other times, models may need to be ‘dumbed down’ to achieve accuracy and statistical validity. In such instances, simplification may be the way to go.

One way of translating this into the AFC context is for FIs to introduce specific algorithmic audits for all key decision-making algorithms and to repeat these audits at regular intervals. End to end, this could include onboarding systems, customer risk assessment, AI and ML systems used for screening, ongoing transaction monitoring, identification of customers for periodic reviews and identification of customers to offboard.

Done well, these audits can add value and surface issues that testing and statistical modelling overlooked at the deployment stage. They examine systems and models through a different lens. More importantly, they provide crucial evidence to either give assurance or bring issues to light once the algorithms are in operation.
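
One recurring audit check, sketched below with hypothetical score data, is a population stability index (PSI) comparison of a model’s risk-score distribution at deployment with the distribution observed at audit time; a large value suggests the population or the model’s behaviour has drifted and warrants re-examination.

```python
# Minimal sketch (hypothetical data) of one recurring audit check: a
# population stability index (PSI) comparing the distribution of model risk
# scores at deployment with the distribution seen at audit time.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two score distributions."""
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch scores outside the old range
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)   # avoid log(0) / division by zero
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Hypothetical score samples: at go-live vs during a quarterly audit.
rng = np.random.default_rng(0)
scores_at_deployment = rng.beta(2, 5, size=10_000)
scores_today = rng.beta(2, 4, size=10_000)  # population has shifted slightly

value = psi(scores_at_deployment, scores_today)
print(f"PSI = {value:.3f} (a common rule of thumb treats >0.25 as material drift)")
```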

Traceability and ‘Explainability’

The term ‘black box’ refers to the use of AI without understanding the algorithms and the process used to determine the outputs. At the end of the day, it is the FI’s obligation to explain why an event was detected, how the facts support the detection and how such events differ from other behaviours that were not detected. This means detection should start with defining normality, not just with finding anomalies. These definitions are a must for meeting regulators’ expectations. FIs cannot simply rely on a ‘tool’, however powerful it might be; they own the burden of explaining the decision-making process.
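
As a minimal sketch of ‘defining normality first’, assuming a handful of hypothetical behavioural features, the snippet below builds a per-customer baseline from historical activity and then explains a detection by listing which features deviate materially from that baseline.

```python
# Minimal sketch (hypothetical features and data) of defining normality
# first: build a per-customer baseline, then explain a detection by listing
# which features deviate from that baseline and by how much.
import numpy as np

FEATURES = ["monthly_volume", "cash_ratio", "cross_border_count"]

def explain_detection(history: np.ndarray, current: np.ndarray, z_cut: float = 3.0):
    """Return human-readable reasons: features whose current value deviates
    from the customer's own historical norm by more than `z_cut` std devs."""
    mean = history.mean(axis=0)
    std = history.std(axis=0) + 1e-9
    z = (current - mean) / std
    return [
        f"{name}: current {cur:.2f} vs typical {m:.2f} ({score:+.1f} std devs)"
        for name, cur, m, score in zip(FEATURES, current, mean, z)
        if abs(score) > z_cut
    ]

# Hypothetical customer: 12 months of stable history, then an unusual month.
history = np.array([[10_000, 0.10, 2]] * 12, dtype=float)
history += np.random.default_rng(1).normal(0, [500, 0.02, 1], size=(12, 3))
current = np.array([48_000, 0.65, 9], dtype=float)

for reason in explain_detection(history, current):
    print(reason)
```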

Regulation, Legislation and Best Practice Guidance

With much attention paid by investigative journalists globally to instances where the ‘computer says no’, regulators and legislators around the world have been active on the topic.

The European Parliament approved the resolution titled ‘Automated decision-making processes: ensuring consumer protection and the free movement of goods and services’ in February 2020. The resolution aims to protect consumers from incorrect or discriminatory use of automated systems and AI. This resolution states, ‘in order to assess whether products with automated decision-making capabilities are in conformity with the relevant safety rules, it is essential for the algorithms behind those capabilities to be adequately transparent, and to be explainable to market surveillance authorities.’3,4

In a landmark ruling in February 2020 on the Systemic Risk Indication (SyRI), the District Court of The Hague ruled that the right to privacy prevails over the alleged benefits in the fight against fraud. SyRI is a legal instrument that the Dutch government uses to combat fraud in areas such as benefits, surcharges and taxes. The court found that the legislation governing the deployment of SyRI violates higher law, as it does not comply with Article 8 of the European Convention on Human Rights (ECHR), which protects the right to respect for private life.5 While the ruling feeds a wider ethical debate about the balance between privacy and the need to spot bad actors, the case is particularly interesting with regard to the lack of transparency of the algorithm.

The Dutch government’s SyRI is a risk calculation model developed over the past decade by the social affairs and employment ministry to predict the likelihood of an individual committing benefit or tax fraud or violating labour laws. SyRI gathers government data previously held in separate silos, such as employment, personal debt and benefit records, and education and housing histories, and then analyses the data using a ‘secret’ algorithm to identify which individuals might be at higher risk of committing benefit fraud. The court ruled that the SyRI legislation contained insufficient safeguards against privacy intrusions and criticised a ‘serious lack of transparency’ about how it worked, despite the state’s arguments to the contrary.6

AFC systems within FIs need to be examined not just for conflict with privacy laws but also to determine whether they stand up to scrutiny with respect to transparency. For example, risk models identifying high-risk clients need to be explainable and tested for bias, be it socioeconomic, regional or ethnic.

The Public Voice coalition,7 a non-governmental organisation, has published 12 universal guidelines for AI. These state that an AI system should be deployed only after adequate evaluation of its purpose and objectives, its benefits and its risks; that institutions must be responsible for decisions made by an AI system; and that institutions must ensure the accuracy, reliability and validity of those decisions. Overall, the 12 guidelines aim to promote transparency and accountability for these systems and to ensure people retain control of the systems they have built and deployed.

Conclusion

Media coverage of AI and ML ranges from euphoria to alarmist visions of machines taking over the world. The reality for most FIs is that increasing automation and adoption of AI and ML is a necessity for survival. These tools offer the opportunity to control the cost of compliance without compromising effectiveness, but they also bring complexity. FIs that understand their accountability and act with due care and skill will be the ones that thrive.

Shilpa Arora, CAMS, AML director, ACAMS Europe, Middle East and Africa

Reviewers: Raj Ahya, financial crime subject-matter expert, ACAMS Europe, Middle East and Africa

Yaron Hazan, financial crime compliance expert, ThetaRay

  1. Ian Sample, “Computer says no: why making AIs fair, accountable and transparent is crucial,” The Guardian, 5 November 2017, https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fairaccountable-and-transparent-is-crucial
  2. Eva de Valk of De Graaf & De Valk, “A call for transparent AI: ‘Computer says no’ is not enough,” ITU News, 18 January 2019, https://news.itu.int/a-call-for-transparent-aicomputer-says-no-is-not-enough/
  3. “European Parliament approves resolution on automated decision making process,” UK Law Societies’ Joint Brussels Office, 18 February 2020, https://www.lawsocieties.eu/main-navigation/european-parliament-approves-resolutionon-automated-decision-makingprocess/6000786.article
  4. “European Parliament resolution of 12 February 2020 on automated decision-making processes: ensuring consumer protection and free movement of goods and services,” European Parliament, 12 February 2020, https://www.europarl.europa.eu/doceo/document/TA-9-2020-0032_EN.html
  5. “SyRI-wetgeving in strijd met het Europees Verdrag voor de Rechten voor de Mens,” de Rechtspraak, 5 February 2020, https://www.rechtspraak.nl/Organisatie-en-contact/Organisatie/Rechtbanken/Rechtbank-Den-Haag/Nieuws/Paginas/SyRI-wetgeving-in-strijd-met-het-Europees-Verdrag-voor-de-Rechten-voor-de-Mens.aspx
  6. Jon Henley and Robert Booth, “Welfare surveillance system violates human rights, Dutch court rules,” The Guardian, 5 February 2020, https://www.theguardian.com/technology/2020/feb/05/welfare-surveillance-system-violates-human-rights-dutch-court-rules
  7. “Universal Guidelines for Artificial Intelligence,” The Public Voice, 23 October 2018, https://thepublicvoice.org/ai-universal-guidelines/