Why the 2024 Update Matters More Than Prior MRM Guidance
The Federal Financial Institutions Examination Council's 2024 update to model risk management guidance builds on SR 11-7, the foundational Federal Reserve guidance on model risk management that has governed bank model validation programs since 2011. SR 11-7 established the three-part framework - model development, implementation, and use; model validation; and governance, policies, and controls - that remains the structural basis for examination.
What the 2024 update introduces is specific guidance on AI and machine learning models, which SR 11-7 predated and therefore did not address. The new guidance applies the existing MRM framework to AI models but adds requirements around explainability, bias testing, and data provenance that have no direct equivalent in SR 11-7's technical standards.
The practical challenge is that much of the AI-specific language in the 2024 update is drafted at a level of generality that allows significant examiner discretion. Words like "appropriate explainability," "reasonable bias mitigation," and "adequate data governance" carry different operational meanings depending on the examiner team, the institution's risk profile, and the specific AI application being examined. This article maps the ambiguous provisions to the interpretations that examiners have been applying in early examination cycles.
Model Inventory: The Coverage Question
The 2024 update explicitly extends the model inventory requirement to include AI models used in automated decision-making, including models used for credit scoring, fraud detection, compliance monitoring, customer segmentation, and operational process optimization. This is a broader scope than many banks had been applying, particularly for AI models embedded in vendor-supplied systems.
Examiners are looking at two inventory coverage questions. First: does the model inventory capture all material AI-driven decision processes, including those in vendor platforms? Second: is the materiality determination documented and defensible? Many banks had excluded vendor AI models from their inventories on the grounds that they were not "bank models." The 2024 guidance rejects that categorization: MRM obligations extend to any model that influences a bank decision, regardless of who built it.
For compliance teams building or updating model inventories, the practical implication is a system-by-system assessment of vendor platforms to identify embedded AI components, a determination of which of those components meet the model definition, and documentation of the materiality assessment process that determines tier classification.
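The assessment above can be sketched as a structured inventory record with a documented tiering rule. Everything here is illustrative: the field names, the `assign_tier` thresholds, and the example model are assumptions for the sketch, not terms from the guidance.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """Illustrative inventory entry; field names are assumptions, not regulatory schema."""
    name: str
    owner: str
    vendor_supplied: bool        # vendor-embedded AI components must be inventoried too
    customer_impact: bool        # output directly affects a customer decision
    annual_decision_volume: int
    materiality_rationale: str   # the documented, defensible materiality determination

def assign_tier(m: ModelRecord) -> int:
    """Toy tiering rule: customer-impacting, high-volume models land in Tier 1."""
    if m.customer_impact and m.annual_decision_volume >= 100_000:
        return 1
    if m.customer_impact or m.vendor_supplied:
        return 2
    return 3

fraud_model = ModelRecord(
    name="vendor-fraud-score", owner="Payments Risk", vendor_supplied=True,
    customer_impact=True, annual_decision_volume=2_500_000,
    materiality_rationale="Blocks transactions in real time; high decision volume.")
print(assign_tier(fraud_model))  # -> 1
```

The point of encoding the rule rather than applying it ad hoc is that the tier assignment becomes reproducible and the rationale field travels with the record into examination request responses.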
Explainability: What "Appropriate" Actually Means in Examination
The 2024 update requires that AI models used in material decisions be explainable to the degree "appropriate for the model's purpose and risk." This formulation is intentionally flexible - the standard for a fraud detection model applied to a real-time payment authorization is different from the standard for a credit risk model used in loan origination.
In examination practice, examiners have been applying an effects-based interpretation: if the model output drives a decision that directly affects a customer (credit approval, transaction blocking, fee assessment), the explainability standard is higher and must include the ability to produce adverse action explanations that meet the requirements of Regulation B and, where applicable, the Fair Credit Reporting Act. For models where the output is used to inform rather than determine a decision, a lower explainability threshold applies, but the bank must document its reasoning for that categorization.
The documentation that examiners have been requesting includes: the explainability methodology used for each material model, the results of explainability testing during model validation, and evidence that the explainability outputs were reviewed by the model validation function rather than simply asserted by the model development team.
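For the higher explainability tier, one minimal approach - sketched here under the simplifying assumption of a linear scoring model - is to rank each feature's contribution to the score shortfall against a baseline (such as an approved-population average) and surface the largest negative contributors as candidate adverse-action reasons. The function name, weights, and feature names are hypothetical; attribution for non-linear models would require a different method.

```python
def reason_codes(weights, applicant, baseline, top_n=2):
    """Rank features by how much they pushed the applicant's score below
    the baseline; return the worst offenders as candidate adverse-action reasons."""
    contributions = {
        f: weights[f] * (applicant[f] - baseline[f]) for f in weights
    }
    most_negative = sorted(contributions.items(), key=lambda kv: kv[1])
    return [f for f, c in most_negative[:top_n] if c < 0]

# Hypothetical linear credit model: weights and values are made up for illustration.
weights   = {"utilization": -0.8, "on_time_rate": 1.2, "inquiries": -0.3}
applicant = {"utilization": 0.9,  "on_time_rate": 0.75, "inquiries": 6}
baseline  = {"utilization": 0.3,  "on_time_rate": 0.98, "inquiries": 1}
print(reason_codes(weights, applicant, baseline))  # -> ['inquiries', 'utilization']
```

Whatever method is used, the validation function - not the development team - should review that the returned reasons are faithful to the model's actual behavior, which is the review evidence examiners have been requesting.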
Bias Testing: The Fair Lending and ECOA Intersection
The 2024 guidance introduces specific bias testing requirements for AI models used in credit and related decisions, with explicit cross-references to fair lending obligations under the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act. Examiners are treating the bias testing section as an integrated fair lending and MRM examination item, not two separate checks.
The guidance requires banks to document: the protected classes tested for bias; the bias metrics used and the rationale for their selection; the test population and the data source; the threshold at which bias findings trigger model adjustment or additional review; and the remediation actions taken when bias is identified. The last point has been a consistent examination focus: banks that identify bias in testing but cannot show what happened next - whether the model was adjusted, whether a manual review process was implemented, or whether the finding was accepted with documented rationale - receive findings on the monitoring and response process rather than the testing itself.
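A documented bias metric with an explicit trigger threshold can be as simple as the sketch below, which computes each group's approval rate relative to the highest-rate group. The four-fifths cutoff is one commonly used convention, not a threshold prescribed by the 2024 guidance; the group labels and counts are invented for illustration.

```python
def adverse_impact_ratio(approvals: dict) -> dict:
    """approvals maps group -> (approved, total). Returns each group's
    approval rate divided by the highest group's rate."""
    rates = {g: a / t for g, (a, t) in approvals.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

outcomes = {"group_a": (420, 600), "group_b": (250, 500)}
ratios = adverse_impact_ratio(outcomes)
# Four-fifths rule is one common convention for the documented trigger threshold.
flagged = [g for g, r in ratios.items() if r < 0.80]
print(ratios, flagged)
```

The value of the explicit `flagged` step is that it produces a record of which findings crossed the documented threshold - exactly the point at which examiners expect to see a remediation or acceptance decision, not just the test result.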
Data Provenance and Model Documentation Requirements
One of the most operationally demanding sections of the 2024 update addresses data governance for AI model training data. The guidance requires that model documentation include: the source of training data, the date range covered, preprocessing steps applied, known limitations of the training dataset, and how data quality was assessed before use in model development.
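The required documentation items map naturally onto a structured record that can be checked for completeness before a model enters validation. The class and field names below are an illustrative structure, not a regulatory schema, and the example values are invented.

```python
from dataclasses import dataclass, asdict

@dataclass
class TrainingDataRecord:
    """Fields mirror the documentation items the guidance lists; illustrative only."""
    source: str
    date_range: tuple             # (start, end) of the period the data covers
    preprocessing_steps: list
    known_limitations: list
    quality_assessment: str

def missing_fields(rec: TrainingDataRecord) -> list:
    """Flag empty documentation fields before the model enters validation."""
    return [k for k, v in asdict(rec).items() if not v]

rec = TrainingDataRecord(
    source="internal core-banking extract",
    date_range=("2019-01-01", "2023-12-31"),
    preprocessing_steps=["dedupe accounts", "winsorize income at p99"],
    known_limitations=[],  # left blank -> should be flagged, not silently accepted
    quality_assessment="profiled nulls and value ranges; see internal DQ report")
print(missing_fields(rec))  # -> ['known_limitations']
```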
For models trained on external data or data purchased from third-party providers, additional provenance documentation is required: the terms under which the data was licensed, any restrictions on model deployment that arise from the data license, and an assessment of whether the training data reflects the bank's actual deployment population. This last requirement has created compliance issues for banks using commercially available training datasets that were assembled from national or international populations that may not represent the bank's regional customer base.
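The deployment-population question above is quantifiable: one common technique is the population stability index (PSI), which compares the distribution of a variable in the training data against the bank's actual applicant pool. The bucket shares below are invented, and the 0.25 trigger is a widely used rule of thumb rather than a threshold from the guidance.

```python
import math

def population_stability_index(train_share, deploy_share):
    """PSI over matching buckets of one variable; both inputs must sum to 1.0.
    A common rule of thumb treats PSI > 0.25 as a material shift (an assumption here)."""
    return sum((d - t) * math.log(d / t)
               for t, d in zip(train_share, deploy_share))

# Illustrative bucket shares for, say, applicant income bands.
train  = [0.30, 0.40, 0.20, 0.10]   # national commercial training dataset
deploy = [0.10, 0.30, 0.35, 0.25]   # bank's regional applicant pool
psi = population_stability_index(train, deploy)
print(round(psi, 3), "review representativeness" if psi > 0.25 else "ok")
```

A PSI run per material training variable, with results filed in the model development documentation, gives the bank a defensible answer to the representativeness question before an examiner asks it.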
Examiners have been requesting model development documentation as part of the standard model risk examination request package, not just as a special request. Banks that maintain model documentation only informally - in analyst workbooks and email threads rather than a structured model repository - are finding that responding to these requests adds significant unplanned time to examination cycles. As we explain in our article on examination preparation and document management, the quality of document organization determines the trajectory of an examination more than the substance of the controls in many cases.
Third-Party Model Oversight: The New Examination Priority
The 2024 guidance is explicit that model risk management obligations extend to models supplied by third parties when those models are used in material bank decisions. Examiners have been focusing on three questions for third-party AI models: Does the bank have access to sufficient model documentation to perform validation? Has the bank actually performed validation, or has it accepted the vendor's validation documentation as a substitute? And does the vendor contract include rights that allow the bank to fulfill its MRM obligations (including audit rights, access to model methodology, and notification of material changes)?
The contract rights question has proven to be a significant gap at many banks. Vendor contracts for AI platforms were often negotiated before MRM obligations were extended to AI models, and they do not include the access rights that examination now requires. Renegotiating or amending those contracts is a process that requires legal, procurement, and compliance involvement and is unlikely to be completed quickly.
Building an Examination-Ready MRM Program Under the 2024 Guidance
The common thread in examination findings under the 2024 guidance is documentation completeness and process integrity. Banks that have strong AI governance practices but have not translated those practices into structured, reviewable documentation are receiving findings that do not reflect the actual quality of their model risk management.
The minimum documentation set that supports examination under the 2024 guidance includes: a current model inventory with AI models classified and tiered; model development documentation meeting the data provenance requirements; validation workpapers covering explainability and bias testing; a monitoring program with documented performance thresholds; and a model governance policy that has been reviewed and approved in the prior 12 months.
Conclusion
The 2024 FFIEC model risk management update is not a minor revision to SR 11-7. It introduces substantive new requirements around AI explainability, bias testing, data provenance, and third-party model oversight that many banks have not fully incorporated into their MRM programs. The ambiguous language in the guidance gives examiners discretion, and early examination cycles indicate that examiners are consistently exercising that discretion at the stricter end of the interpretive range.
Compliance teams that read the guidance at the clause level, map it to their existing MRM program, and document the gap and remediation process will be better positioned than those that rely on the summary interpretation they received from their model risk vendor or consulting firm.
Paragex parses FFIEC guidance and maps obligations to your compliance control library at the clause level. Request a demo to see gap detection against your existing MRM documentation.