Fair Lending and AI Explainability: What Banks Need to Understand
What Is the Explainability Problem in AI-Assisted Lending?
Banks and credit unions are increasingly using AI and complex algorithmic models to support credit decisions, including loan approvals, credit line adjustments, and underwriting. These tools can process large volumes of data quickly and identify patterns that traditional models may miss. The challenge is that when an AI model drives or informs a credit decision, the institution may find it difficult to explain specifically why that decision was made.
That explanation gap has direct implications for fair lending. Federal law has long required creditors to tell applicants the specific reasons for denying credit or changing terms. Using AI does not change that obligation.
What Federal Guidance Says About AI and Adverse Action
The Equal Credit Opportunity Act (ECOA) and its implementing regulation, Regulation B, set requirements for creditors that take adverse action on a credit application or an existing account. When an adverse action occurs, the creditor must provide the applicant with a statement identifying the specific principal reasons for that action.
In September 2023, the Consumer Financial Protection Bureau (CFPB) issued guidance addressing how those requirements apply when creditors use AI and complex algorithmic models. The CFPB stated that creditors cannot satisfy their adverse action notice obligations by using generic sample forms or checklists if those forms do not accurately reflect the actual reasons for the decision.¹ The guidance also made clear that model complexity does not reduce or eliminate those obligations: ECOA and Regulation B apply to technology-driven credit decisions in the same way they apply to any other credit decision. In connection with the guidance, the CFPB director noted that there is no special exemption for artificial intelligence.
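To make that notice standard concrete, one way institutions approach it technically is to rank each model input's contribution to an applicant's score and surface the largest negative contributors as the principal reasons. The sketch below is a minimal, hypothetical illustration against a hand-specified linear scoring model; the feature names, weights, threshold, and reason phrasings are invented for the example, and more complex models generally require dedicated attribution methods to produce comparable rankings.

```python
# Minimal sketch: deriving "principal reasons" for an adverse action
# from a simple linear credit-scoring model. All weights, features,
# and the approval threshold are hypothetical, for illustration only.

# Hypothetical model: score = bias + sum(weight[f] * applicant[f])
WEIGHTS = {
    "credit_utilization":   -2.0,  # higher utilization lowers the score
    "recent_delinquencies": -1.5,  # more delinquencies lower the score
    "years_credit_history":  0.8,  # longer history raises the score
    "debt_to_income_ratio": -1.2,  # higher DTI lowers the score
}
BIAS = 3.0
APPROVAL_THRESHOLD = 2.0

# Reason language an institution might map each feature to. In practice
# this mapping would be reviewed by compliance, not hard-coded by engineers.
REASON_TEXT = {
    "credit_utilization":   "Proportion of balances to credit limits is too high",
    "recent_delinquencies": "Number of recent delinquencies",
    "years_credit_history": "Length of credit history",
    "debt_to_income_ratio": "Income insufficient for amount of credit requested",
}

def score_with_contributions(applicant: dict) -> tuple[float, dict]:
    """Return the model score and each feature's signed contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return BIAS + sum(contributions.values()), contributions

def principal_reasons(contributions: dict, top_n: int = 2) -> list[str]:
    """Rank features by how much they pulled the score down and
    return reason statements for the top contributors."""
    negative = sorted(
        (f for f in contributions if contributions[f] < 0),
        key=lambda f: contributions[f],  # most negative first
    )
    return [REASON_TEXT[f] for f in negative[:top_n]]

applicant = {
    "credit_utilization": 0.9,   # 90% utilization
    "recent_delinquencies": 2,
    "years_credit_history": 3,
    "debt_to_income_ratio": 0.45,
}

score, contribs = score_with_contributions(applicant)
if score < APPROVAL_THRESHOLD:
    print("Adverse action. Principal reasons:")
    for reason in principal_reasons(contribs):
        print(" -", reason)
```

Even with a ranking like this, the guidance puts the burden on the institution to confirm that the surfaced reasons accurately reflect why the decision was made, which is why the mapping from model features to reason language belongs under compliance review rather than engineering alone.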
Why This Is a Leadership Problem, Not Just a Technical One
When an AI-assisted lending decision produces an outcome the institution cannot explain, the first question regulators and complainants ask is not about the model. It is about who approved the tool, what review was done before it was deployed, and what the institution knew and when.
A bank that cannot answer those questions is not just facing a technology gap. It is facing a governance gap. The model may have been performing exactly as designed. The problem is that nobody in a position of authority asked the right questions before the tool went live, and nobody had a clear process for catching issues before they surfaced as formal complaints.
This is a pattern regulators are watching. The Financial Stability Board has noted that monitoring AI-related vulnerabilities in financial institutions remains at an early stage, and that lack of transparency and the evolving nature of AI systems make those vulnerabilities particularly difficult to track from the outside.² Institutions that cannot readily answer those questions may find that the challenge is not limited to the model itself. The review and approval process for the tool, and the oversight structure around it, may also come into focus.
What a Governance Committee Is Positioned to Do
The NIST AI Risk Management Framework describes governance as the function that establishes accountability, assigns authority, and creates the conditions for responsible AI use across an organization.³ In a lending context, that translates to a practical set of questions a governance committee could ask before an AI-assisted tool is approved for use in a credit decision workflow:
Has a fair lending risk assessment been completed for this tool?
Can the model produce specific reasons for adverse actions that meet the standard described in CFPB guidance?
Is there a documented human checkpoint between the AI output and the decision communicated to the applicant?
Who owns ongoing monitoring of this tool once it is deployed?
Those questions do not require the committee to understand the model's technical architecture. They require the committee to exist, to have authority, and to have asked before something went wrong. An institution with a functioning governance structure may be better positioned to respond when questions arise about how an AI-assisted decision was made and who was responsible for overseeing the tool.
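As one way to operationalize those questions, a committee might capture its answers in a structured pre-deployment review record so that an auditable artifact exists before the tool goes live. The sketch below is hypothetical; the field names, approval gate, and record structure are assumptions for illustration, not a prescribed or regulatory format.

```python
# Hypothetical pre-deployment review record for an AI lending tool.
# Field names and gating logic are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolGovernanceReview:
    tool_name: str
    reviewed_on: date
    fair_lending_assessment_completed: bool         # risk assessment done?
    produces_specific_adverse_action_reasons: bool  # meets the notice standard?
    human_checkpoint_documented: bool               # human review before applicant notice?
    monitoring_owner: str                           # named owner for ongoing monitoring
    notes: list[str] = field(default_factory=list)

    def approved_for_deployment(self) -> bool:
        """A simple gate: every governance question must have an
        affirmative answer and a named monitoring owner."""
        return (
            self.fair_lending_assessment_completed
            and self.produces_specific_adverse_action_reasons
            and self.human_checkpoint_documented
            and bool(self.monitoring_owner.strip())
        )

review = AIToolGovernanceReview(
    tool_name="credit-line-adjustment-model",  # hypothetical tool
    reviewed_on=date(2025, 11, 3),
    fair_lending_assessment_completed=True,
    produces_specific_adverse_action_reasons=False,  # blocks approval
    human_checkpoint_documented=True,
    monitoring_owner="Model Risk Management",
)
print(review.approved_for_deployment())  # False until every answer is yes
```

The point of a record like this is not the code; it is that the answers are written down, attributable, and dated before deployment rather than reconstructed after a complaint.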
Sources
1. Consumer Financial Protection Bureau, Circular 2023-03: Adverse Action Notification Requirements and the Proper Use of the CFPB's Sample Forms Provided in Regulation B, September 2023 - https://www.consumerfinance.gov/compliance/circulars/circular-2023-03-adverse-action-notification-requirements-and-the-proper-use-of-the-cfpbs-sample-forms-provided-in-regulation-b/
2. Financial Stability Board, Monitoring Adoption of AI in the Financial Sector, October 2025 - https://www.fsb.org/2025/10/monitoring-adoption-of-ai-in-the-financial-sector
3. NIST, AI Risk Management Framework 1.0 - https://www.nist.gov/itl/ai-risk-management-framework
4. Microsoft, 2026 Data Security Index - https://info.microsoft.com/ww-landing-data-sec-index-2026.html?lcid=en-us
5. ABA Banking Journal, Are We Sleepwalking Into an Agentic AI Crisis? December 2025 - https://bankingjournal.aba.com/2025/12/are-we-sleepwalking-into-an-agentic-ai-crisis
Related pages in this series:
For a plain language overview of how the NIST AI RMF applies to banks and credit unions, see NIST AI RMF for Banks and Credit Unions.
For a detailed statutory overview of Texas HB 149, see Texas HB 149 and Financial Institutions.
For context on federal AI strategy and its connection to state-level governance requirements, see Federal Strategy to State Law.
For an overview of the shadow AI visibility problem in banking, see Shadow AI in Banking.
This page is for informational purposes only. It provides a general factual overview of publicly available laws, regulatory guidance, and frameworks. It does not constitute legal advice, regulatory interpretation, compliance guidance, or a recommendation of any specific course of action. Laws and guidance referenced here may be subject to change. Qualified legal and compliance professionals can help organizations assess their specific circumstances and obligations.

