
Navigating the New Frontier: Freddie Mac’s Evolved AI/ML Guidance for Mortgage Companies

Feb 16, 2026

Artificial intelligence (AI) and machine learning (ML) are no longer emerging tools in mortgage operations. They are already embedded across origination, underwriting, servicing, fraud detection, and customer engagement, and their influence continues to expand. As that reliance grows, so does scrutiny around how these tools are governed, monitored, and controlled. 

Freddie Mac’s updated guidance, issued under Bulletin 2025-16 and effective March 3, 2026, reflects this reality. The update signals a clear shift away from high-level policy acknowledgment and toward a structured, risk-based approach to AI governance that operates continuously, not episodically. 

This change mirrors what many mortgage companies are already seeing more broadly across financial services regulation: expectations are moving beyond annual reviews and attestations and toward demonstrable oversight, documented risk management, and alignment with recognized governance and security standards. 

From General Compliance to Granular Governance 

Historically, Freddie Mac’s expectations for Seller/Servicers using AI/ML centered on a relatively straightforward set of requirements, including: 

  • Compliance with applicable laws and regulations 
  • Senior management-approved AI policies 
  • Communication of policies to relevant personnel 
  • Annual policy reviews 
  • Disclosure of AI usage upon request 

Those expectations have not gone away. What has changed is the depth and specificity of what Freddie Mac now expects organizations to demonstrate. The updated framework places far greater emphasis on operational controls, defined accountability, ongoing monitoring, and independent validation. 

In practice, this means organizations must be prepared to demonstrate not just that policies exist, but that AI-related risks are actively identified, assessed, and managed throughout the full lifecycle of each system. 

Key Changes in the New Framework

1. Proactive Risk Management Over Simple Disclosure

Mortgage companies are now expected to move well beyond identifying where AI is used. The focus is on understanding and managing the risks associated with each model or tool. Freddie Mac’s guidance makes clear that organizations should be actively addressing risks such as: 

  • Data poisoning, where compromised or manipulated data influences model behavior 
  • Adversarial attacks designed to produce incorrect or misleading outputs 
  • Model drift, where performance degrades as underlying data patterns change 
  • Bias and fairness risks, particularly in underwriting, pricing, and servicing decisions 

This requires formal risk assessments tied to individual AI use cases, documented mitigation strategies, and ongoing reviews that reflect how these systems actually operate over time.
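As one illustration of what ongoing model-drift review can look like in practice, the sketch below computes a Population Stability Index (PSI), a metric commonly used to flag when a production score distribution has shifted away from its validation baseline. This is an assumption-laden example, not a Freddie Mac requirement: the metric choice, bin count, and the conventional ~0.25 alarm threshold are all illustrative.

```python
# Hypothetical sketch (not prescribed by the bulletin): Population Stability
# Index (PSI) for flagging model drift between a validation baseline and
# recent production scores. Thresholds here are common industry conventions.
import math

def psi(expected, actual, bins=10):
    """PSI between two score samples; values above ~0.25 often signal material drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def bin_pcts(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)  # clamp to edge bins
            counts[i] += 1
        return [max(c / len(sample), 1e-6) for c in counts]   # floor avoids log(0)
    e, a = bin_pcts(expected), bin_pcts(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [600 + (i % 100) for i in range(1000)]   # stand-in validation scores
shifted  = [630 + (i % 100) for i in range(1000)]   # production scores, shifted upward
print(f"identical distributions: PSI = {psi(baseline, baseline):.3f}")
print(f"shifted distribution:    PSI = {psi(baseline, shifted):.3f}")
```

A recurring check like this, run on a schedule and logged, is one concrete way to turn "ongoing reviews" from a policy statement into auditable evidence.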

2. Trustworthy AI Embedded into System Design

Rather than treating trust, fairness, and transparency as after-the-fact considerations, Freddie Mac’s updated guidance emphasizes embedding these principles directly into AI system design. 

Consistent with NIST's AI Risk Management Framework, trustworthy AI should be: 

  • Valid and reliable 
  • Secure and resilient 
  • Transparent and explainable 
  • Accountable, with clear ownership 
  • Privacy-preserving 
  • Free from unfair bias 

For mortgage companies, this translates into practical expectations: being able to explain how models influence decisions, clearly document data sources and training methodologies, and demonstrate that controls exist to protect consumer data and mitigate discriminatory outcomes. 
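One simple, widely used screen for discriminatory outcomes is the adverse impact ratio (the "four-fifths rule"). The sketch below is illustrative only: the group labels and counts are invented, the 0.80 threshold is a conventional screening heuristic rather than a Freddie Mac standard, and real fair-lending testing involves far more rigorous statistical analysis.

```python
# Hypothetical sketch: adverse impact ratio ("four-fifths rule") as a first-pass
# fairness screen on approval decisions. Group names, counts, and the 0.80
# threshold are illustrative assumptions, not regulatory requirements.
def adverse_impact_ratio(approvals_by_group):
    """Return (lowest approval rate / highest approval rate, per-group rates)."""
    rates = {g: approved / total
             for g, (approved, total) in approvals_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

outcomes = {"group_a": (80, 100), "group_b": (60, 100)}  # (approved, total)
ratio, rates = adverse_impact_ratio(outcomes)
print(f"AIR = {ratio:.2f} -> {'flag for review' if ratio < 0.80 else 'within screen'}")
```

Documenting the test, its inputs, and any remediation that follows is what turns a calculation like this into demonstrable governance.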

3. Continuous Monitoring and Oversight

Annual policy reviews are no longer sufficient on their own. Organizations are expected to maintain ongoing oversight of AI systems, including: 

  • Performance monitoring and accuracy validation 
  • Security monitoring for unauthorized access or manipulation 
  • Data quality and integrity controls 
  • Regular bias testing and documented remediation 
  • Incident response procedures tailored to AI-related failures or attacks 

This shift effectively pulls AI oversight into existing enterprise risk management, cybersecurity, and compliance monitoring programs rather than treating it as a standalone exercise.

4. Mandatory Alignment with Industry Standards

Freddie Mac now explicitly calls for alignment with recognized global cybersecurity and information security frameworks, including: 

  • NIST SP 800-53 for security and privacy controls 
  • ISO/IEC 27001 for information security management systems 

This elevates AI governance expectations to the same level as broader enterprise security and compliance programs, requiring formal documentation, implemented controls, risk assessments, and ongoing auditability.

5. Clear Accountability and Segregation of Duties

To strengthen governance and reduce conflicts of interest, organizations are expected to establish clear separation between: 

  • AI development teams 
  • Model validation and testing functions 
  • Risk management and compliance oversight 
  • Audit and independent review 

Ownership of AI systems, associated risks, and compliance obligations must be formally documented. No single function should control the entire AI lifecycle without independent challenge.

6. Formal Internal and External Audits

Freddie Mac now expects both internal and independent external audits to be a standard component of AI governance. These audits should evaluate: 

  • Compliance with internal policies and regulatory expectations 
  • Effectiveness of risk controls and monitoring activities 
  • Alignment with NIST and ISO standards 
  • Quality and consistency of governance documentation 

Audit frequency should be driven by risk level, system criticality, and the organization’s defined risk tolerance. 

Strategic Action Plan to Meet the March 3, 2026 Deadline 

1. Establish an AI Governance Committee 

Organizations should form a cross-functional governance body that includes leadership from IT, cybersecurity, risk management, legal, compliance, and business operations. This group should be responsible for: 

  • Approving AI policies and risk frameworks 
  • Reviewing risk assessments and monitoring results 
  • Overseeing remediation efforts 
  • Ensuring ongoing regulatory alignment

2. Perform a Comprehensive AI/ML Inventory and Risk Assessment

All AI systems across origination, underwriting, fraud detection, servicing, marketing, and customer support should be documented. For each system: 

  • Define purpose and business impact 
  • Identify data sources and model dependencies 
  • Assess risks related to security, bias, compliance, and reliability 
  • Assign risk ratings and accountable owners 

This inventory becomes the foundation for sustainable governance.
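A minimal sketch of what one inventory record might capture, using the fields listed above. The field names, system name, and risk labels are illustrative assumptions; organizations would adapt the schema to their own risk taxonomy and tooling.

```python
# Hypothetical sketch of an AI/ML inventory record covering the fields above.
# System names, owners, and risk labels are illustrative, not prescribed.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    business_purpose: str
    data_sources: list
    dependencies: list
    risks: dict          # e.g., {"bias": "high", "security": "medium"}
    risk_rating: str     # overall rating: "low" | "medium" | "high"
    owner: str           # accountable individual or role

inventory = [
    AISystemRecord(
        name="income-doc-classifier",  # illustrative system
        business_purpose="Route borrower income documents during origination",
        data_sources=["uploaded borrower documents"],
        dependencies=["OCR service"],
        risks={"bias": "low", "security": "medium", "reliability": "medium"},
        risk_rating="medium",
        owner="VP, Loan Operations",
    ),
]

high_risk = [r.name for r in inventory if r.risk_rating == "high"]
print(f"{len(inventory)} system(s) inventoried; {len(high_risk)} rated high risk")
```

Even a lightweight structured record like this makes risk-based audit scheduling and ownership reporting straightforward to query and evidence.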

3. Update Governance Policies and Controls

Governance documentation should be enhanced to address: 

  • Trustworthy AI principles 
  • Defined AI risk tolerance and escalation thresholds 
  • Lifecycle management requirements, from design through retirement 
  • Documentation and approval standards 
  • Explicit mapping to NIST SP 800-53 and ISO/IEC 27001 

A formal gap analysis can help identify control deficiencies and prioritize remediation.

4. Implement Continuous Monitoring Capabilities

Organizations should establish processes and tooling to: 

  • Track model performance and drift 
  • Monitor data integrity 
  • Detect security threats 
  • Conduct recurring bias and fairness testing 
  • Log AI-related incidents and remediation activities 

Results should be reviewed regularly by governance leadership.
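The monitoring loop above can be sketched as a recurring check that compares a model's rolling performance to a governance-approved tolerance and logs an incident when the tolerance is breached. The metric, threshold, and log structure below are illustrative assumptions, not a prescribed implementation.

```python
# Hypothetical sketch: a recurring performance check that logs an incident
# whenever rolling accuracy drops below a governance-approved tolerance.
# The 0.90 tolerance and log fields are illustrative assumptions.
from datetime import datetime, timezone

INCIDENT_LOG = []

def check_model_performance(model_name, rolling_accuracy, tolerance=0.90):
    """Return 'ok' or 'escalate'; breaches are appended to the incident log."""
    if rolling_accuracy < tolerance:
        INCIDENT_LOG.append({
            "model": model_name,
            "metric": "rolling_accuracy",
            "value": rolling_accuracy,
            "tolerance": tolerance,
            "detected_at": datetime.now(timezone.utc).isoformat(),
            "status": "open",
        })
        return "escalate"
    return "ok"

print(check_model_performance("fraud-score-v3", 0.94))  # within tolerance
print(check_model_performance("fraud-score-v3", 0.86))  # breach -> incident logged
print(f"open incidents: {len(INCIDENT_LOG)}")
```

Routing these logged incidents into the governance committee's regular review is one way to evidence that monitoring results actually drive remediation.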

5. Formalize Accountability and Segregation of Duties

Clear ownership should be assigned for each AI system and its associated risks. Development, validation, operations, and audit responsibilities should be distinctly separated and formally documented to support strong internal controls.

6. Plan and Execute Regular Audits

An audit schedule should be developed that includes: 

  • Internal reviews aligned to risk levels 
  • Independent third-party audits for higher-impact AI systems 
  • Formal reporting of findings and corrective actions

7. Conduct Training and Awareness Programs

Relevant stakeholders should understand: 

  • Freddie Mac’s updated expectations 
  • Core AI risk and governance concepts 
  • Their specific responsibilities within the AI framework 

Training should extend beyond technical teams to include leadership, compliance, and business users. 

What This Means For Mortgage Companies 

Freddie Mac’s updated AI/ML guidance represents a meaningful shift in expectations for mortgage companies. Compliance is no longer defined by policy documentation and periodic review cycles alone. Organizations are now expected to operate a living, risk-based AI governance program grounded in continuous monitoring, defined accountability, formal controls, and alignment with established security standards. 

Mortgage companies that begin building these capabilities now will be better positioned not only for Freddie Mac compliance by March 2026, but also for stronger operational resilience, improved data protection, and greater confidence in AI-driven decision-making. 

Richey May’s experts help lenders and servicers align AI governance with Freddie Mac expectations, cybersecurity standards, and real-world operational risk. Contact our Mortgage Banking or Cybersecurity team to start the conversation. 
