A risk assessment is widely considered critical to the health of a Bank Secrecy Act (BSA) program. As BSA officers continue to add rigor to their risk assessment processes, regulators and practitioners have begun to ask whether risk assessments should be classified as models and thus held to the standards found in the OCC’s 2011-12 Model Risk Bulletin (OCC 2011-12 Bulletin).1 Concerned about non-compliance with the OCC 2011-12 Bulletin, anti-money laundering (AML) compliance programs feel pressured to classify the risk assessment process as a model and subject it to the Bulletin’s rigors. Before rushing to that conclusion, however, financial institutions should pause and ask themselves: What is the model risk posed by the risk assessment? While a risk assessment can have an important impact on the allocation of compliance resources and may ultimately be consumed by downstream users, this does not mean it should be categorized as a model at every financial institution. This article explores the purpose of a risk assessment, argues against applying the full rigor of the OCC 2011-12 Bulletin to risk assessments in favor of ensuring that proper internal controls are in place, and suggests which risk management practices to apply to the risk assessment program.
What is the risk assessment process?
A risk assessment process is undoubtedly critical and is generally considered the foundation of an effective AML compliance program. It is the mechanism financial institutions use to evaluate the current state of the AML compliance program, including its risks, controls and potential gaps in coverage. A financial institution's level of inherent risk is measured across products, services, customers and geographies. Together, the levels of risk found in those categories help rank business segments relative to one another and indicate which segments are more susceptible to money laundering and Office of Foreign Assets Control (OFAC) risk. The idea is for a financial institution to apply greater mitigating controls around the riskier components identified through the risk assessment process, thereby reducing the institution's residual risk. While the FFIEC Exam Manual outlines the expected approach to executing the risk assessment program, it is important to note that the risk assessment process detailed within the Manual is fundamentally qualitative in nature.2
Risk assessments are typically conducted once every 12 to 18 months, include various data inputs and are based on subjective fact gathering by subject-matter experts. Risk assessment data is collected from multiple sources. Some financial institutions have this data aggregation step automated; others collect the data across the firm and then place it manually in a spreadsheet or database. All institutions face the challenge of "normalizing" the data collected. Normalization simply means standardizing the data across business units so that it is uniform, facilitating a comparison of "apples to apples" and "oranges to oranges." While this process facilitates comparison, it does not fundamentally alter the data.
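To make the normalization step concrete, the sketch below rescales each metric to a common 0-to-1 range across business units. The business unit names, metric names and figures are hypothetical illustrations, not drawn from the article; the point is that rescaling changes only the scale, not the information content.

```python
# Hypothetical risk assessment inputs: raw metrics vary wildly in scale,
# so business units cannot be compared directly.
raw = {
    "Retail Banking":  {"high_risk_customers": 1200, "wire_volume_usd_mm": 340},
    "Private Banking": {"high_risk_customers": 300,  "wire_volume_usd_mm": 890},
    "Broker-Dealer":   {"high_risk_customers": 75,   "wire_volume_usd_mm": 120},
}

def min_max_normalize(data):
    """Rescale every metric to [0, 1] across business units.

    This only changes the scale of the data, not its content --
    the relative ordering of units on each metric is preserved.
    """
    metrics = next(iter(data.values())).keys()
    normalized = {unit: {} for unit in data}
    for m in metrics:
        values = [data[u][m] for u in data]
        lo, hi = min(values), max(values)
        for u in data:
            normalized[u][m] = (data[u][m] - lo) / (hi - lo) if hi > lo else 0.0
    return normalized

norm = min_max_normalize(raw)
```

After normalization, each unit's metrics sit on the same scale (e.g., the unit with the most high-risk customers scores 1.0 on that metric, the unit with the fewest scores 0.0), so "apples to apples" comparison is possible.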
After normalizing the data, the next step is to rank the business units from high to low risk. There are a handful of ways to rank data; for example, Excel has a simple arithmetic function (=RANK) and a button within the spreadsheet ribbon that automatically sorts lists in ascending/descending order. Once the variables are normalized and ranked, the next step is to determine the cutoff points that are indicative of varying levels of risk. For example, if business units are ranked 1-9, some financial institutions may choose to report that risk numerically (e.g., 1, 2, 3, etc.), while others may decide to aggregate or "bucket" similar risks using words like high/medium/low. The cutoff in the latter situation might look like: 1-3 low risk, 4-6 medium risk, and 7-9 high risk. Alternatively, to aggregate or bucket similar risks together, one could use basic statistical methods, such as factor analysis, decision tree analysis, or weighted distance-based analysis. Basic statistical analysis can bring further precision to the cutoff points, but it does not transform any of the data into new output. And ultimately, the cutoff points are still informed by the subject-matter expert's subjective assessment of which attributes are indicative of greater risk. Once business units are categorized in some form that is indicative of risk, the risk assessment is ready for management consideration in future decision-making processes (e.g., What new controls need to be added to mitigate risk? What typologies are missing that would help mitigate our risk?).
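The ranking-and-bucketing step above can be sketched as follows. The unit names, scores and cutoffs are hypothetical; the cutoffs (ranks 1-3 low, 4-6 medium, 7-9 high) mirror the example in the text and would in practice be chosen by a subject-matter expert. Note that nothing here is estimated or predicted: it is sorting plus fixed decision rules.

```python
# Hypothetical normalized risk scores for nine business units.
scores = {
    "Unit A": 0.15, "Unit B": 0.35, "Unit C": 0.22,
    "Unit D": 0.78, "Unit E": 0.51, "Unit F": 0.64,
    "Unit G": 0.90, "Unit H": 0.41, "Unit I": 0.08,
}

# Rank 1 (lowest score) through 9 (highest) -- equivalent to Excel's
# =RANK or a ribbon sort. Simple ordering, no statistical estimation.
ranked = sorted(scores, key=scores.get)
ranks = {unit: i + 1 for i, unit in enumerate(ranked)}

def bucket(rank):
    """Expert-chosen cutoffs: ranks 1-3 low, 4-6 medium, 7-9 high."""
    if rank <= 3:
        return "low"
    if rank <= 6:
        return "medium"
    return "high"

risk_levels = {unit: bucket(r) for unit, r in ranks.items()}
```

The output is a deterministic relabeling of the inputs (the highest-scoring units land in the "high" bucket), which illustrates the article's point that the process manipulates no variables and estimates no uncertain quantity.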
In the process described above, the risk assessment is not modeling anything: it is not manipulating variables to explain the past, and it is not predicting anything about the future. Contrast this with a quantitatively transformative process such as loan default rate modeling, which predicts the likelihood that a bank loan will default based on historical data. A loan default rate model both uses quantitatively transformative calculations (predictive algorithms) and produces quantitatively estimated output (default rates). Even if a risk assessment process utilizes basic statistical methods, like factor analysis, the process is still not quantitatively transforming the existing variables into new output. The risk assessment is simply a mirror—it is a reflection of today's perceived risk environment.
What is a model?
The OCC 2011-12 Bulletin defines a model as “a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques and assumptions to process input data into quantitative estimates.”3
The key descriptor in this definition is the word “quantitative,” which is the critical component to classifying something as a model. Models are inherently quantitative whereas risk assessments are not. To restate the definition, providing additional emphasis, a model is a quantitative method, quantitative system, or quantitative approach transforming data inputs (whether quantitative inputs or qualitative inputs) into quantitatively-estimated outputs.
The OCC 2011-12 Bulletin was written as guidance for financial institutions to understand and mitigate the risk resulting from fundamental errors and potential inaccuracies of model output. The Bulletin is a method of standardizing modeling practices to mitigate model risk across the industry. The model definition in the OCC 2011-12 Bulletin is broad; but in the art of policy writing, one must write the language broadly enough to encapsulate the spirit of a rule. Not every spreadsheet and business process was intended to be classified as a financial model. In the AML compliance context, models are best understood as those tools with quantitative or qualitative data inputs that are mathematically/statistically transformed into a quantitative, estimated output. This quantitative transformation does not occur in many places in AML compliance.4 One example of quantitative transformation specific to an AML compliance program is threshold optimization, in which inputs are quantitatively transformed to project a quantitative output.
To classify a model more directly in the context of AML compliance, one could consider writing a definition that incorporates the following core considerations:
- Complexity – the model must include a quantitatively complex algorithm that provides a mathematical or statistical representation of information;
- Quantitatively transformative – the model should result in output that is an estimate of uncertain values, the accuracy of which depends upon assumptions, the quality of inputs, and the precision brought to the modeling process; and
- Usage – the model should be used repetitively and in support of business activity.
The risk assessment does not meet the above criteria. The risk assessment process does not transform the inputs, does not use quantitatively transformative calculations and does not result in quantitatively estimated outputs. There are no complex algorithms manipulating the data to solve for some unknown/uncertain output. Instead, a risk assessment process is characterized by the following:
- Not complex – the risk assessment process includes simple decision rules (if/then statements) that employ simple, rule-based queries (including subjective scoring) to risk rank characteristics.
- Not quantitatively transformative – the risk assessment process uses simple aggregation of data (e.g., arithmetic, where outcomes are certain) to produce results. Its output does not result in quantitatively estimated outputs.
- Usage – the risk assessment process is conducted once every 12 to 18 months. One could argue that this frequency does not qualify as “repetitive use.”
Conclusion – What is the model risk?
What is the model risk of a risk assessment process? If the process is not modeling anything, then by logical extension there is no model risk. If there is no model risk, then the risk assessment would not benefit from the additional rigor of the OCC 2011-12 Bulletin. However, deeming a risk assessment a non-model is not an excuse to ignore sound internal controls that help ensure the integrity of the process. If the risk assessment process is to serve as the foundation of an AML compliance program, management should ensure internal controls are established to protect that process. Key risk mitigation practices can be gleaned from the OCC 2011-12 Bulletin and included in the internal control framework. For example, a risk assessment program would benefit from many of the documentation requirements set forth in the Bulletin, specifically thoroughly documenting the risk assessment’s methodology and purpose. To the extent applicable, the data quality controls listed in the Bulletin would also benefit the risk assessment process. But instituting such controls does not transform the risk assessment into a model, nor does it suggest that all of the controls found in the OCC 2011-12 Bulletin need to be implemented for management to have a sound risk assessment process. In particular, given that its intended use is to form a point-in-time view of an institution’s risk and controls, forming an independent model validation function to separately assess the integrity of the risk assessment would seem particularly onerous.
Those questioning if risk assessments should be models are asking the question because they see the criticality of the process; they recognize that getting the risk assessment wrong is problematic. Adding to the complexity, the risk assessment process is data input intensive. However, just because the risk assessment process is a critical, data-intensive component to the AML compliance program, it does not mean it needs to be classified as a model. If the risk assessment process cannot match the critical components of a model definition (as discussed), then it should not be classified as a model or subjected to the full rigor of the OCC 2011-12 Bulletin.
- Supervisory Guidance on Model Risk Management, OCC 2011-12, available at http://www.occ.treas.gov/news-issuances/bulletins/2011/bulletin-2011-12a.pdf.
- See Federal Financial Institutions Examination Council (FFIEC) Bank Secrecy Act (BSA)/Anti-Money Laundering (AML) Examination Manual 2014 at 18-26, available at http://www.ffiec.gov/bsa_aml_infobase/documents/BSA_AML_Man_2014.pdf.
- Supervisory Guidance on Model Risk Management, OCC 2011-12, available at http://www.occ.treas.gov/news-issuances/bulletins/2011/bulletin-2011-12a.pdf.
- For more on the topic of “what is an AML model” please see the previously published article in the September-November 2015 issue of ACAMS Today titled “AML Model Risk Management and Validation: Introduction to Best Practices.”