States Are Stepping Up: What Escalating Enforcement Means for Lenders

As federal enforcement has slowed, states aren’t just filling the void — they’re redefining it.
Attorney General Andrea Joy Campbell of Massachusetts recently made that clear with a $2.5 million settlement against Earnest, a student and personal loan provider, for fair lending violations tied to artificial intelligence (AI)-driven underwriting and flawed consumer disclosures.
Massachusetts isn’t the only state getting involved. State-level fair lending enforcement is heating up, and the message is clear: state regulators are equipped and empowered to root out discrimination in lending. For lenders, especially those relying on AI models, this wave is a wake-up call to stay proactive, actively mitigate emerging risks and continuously adhere to best practices.
AI Usage Can Lead to Compliance Risk
From 2014 to 2020, Earnest allegedly failed to maintain adequate fair lending controls, leading to discriminatory outcomes across both AI-driven and manual underwriting. The violations included untested algorithms that produced disparate impacts, undocumented manual overrides, problematic “knockout” rules that automatically eliminated applicants from loan consideration based on specified inputs, and misleading adverse action notices. Although the conduct in question ended years ago, Massachusetts is holding the lender accountable through a governance overhaul and ongoing compliance reporting, sending a clear message that outdated and potentially discriminatory practices, whether they involve AI or not, are still subject to enforcement.
What makes this case stand out isn’t just the scope of the violations — it’s that they were preventable. Leveraging AI and automation can undoubtedly streamline workflows, but regulators expect more than efficiency. They expect transparency, fairness and control.
If your institution can’t explain how a model makes decisions, can’t prove it’s been tested for bias or can’t govern when and how exceptions are made, you’re creating a perfect storm for compliance risk.
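To make “tested for bias” concrete, here is a minimal sketch of a disparate-impact check, assuming approval outcomes have already been tabulated by applicant group. The sample data, group labels and 0.80 threshold (the familiar four-fifths rule of thumb) are illustrative assumptions, not any regulator’s required methodology.

```python
from collections import defaultdict

# Hypothetical records: (applicant_group, approved) pairs.
# In practice, group labels would come from your monitoring data;
# this list is illustrative only.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rates(decisions):
    """Compute the approval rate for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates, reference):
    """Ratio of each group's approval rate to the reference group's.
    Ratios below 0.80 (four-fifths rule of thumb) warrant review."""
    return {g: r / rates[reference] for g, r in rates.items()}

rates = approval_rates(decisions)
for group, ratio in adverse_impact_ratios(rates, reference="group_a").items():
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: ratio={ratio:.2f} [{flag}]")
```

A check like this is only a starting point; the point is that it is documented, repeatable and run on a schedule, so the institution can show its work rather than assert fairness after the fact.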

Enforcement Has Gone Local
While the Massachusetts enforcement action is notable, it’s not the only fair lending action we’ve seen from state regulators recently.
- Discrimination and redlining: New Jersey took action against a cash advance company after executives were reportedly caught on tape directing employees to avoid working with certain racial and ethnic groups — a stark example of overt discrimination. At the same time, the state’s Attorney General is actively investigating redlining by traditional lenders, using demographic lending patterns to identify and address potential disparities.
- Unfair and deceptive practices: New York is actively investigating small business and consumer lenders for biased underwriting, deceptive disclosures and predatory practices — actions that have already led to sizable settlements and public consent orders. Adding to the pressure, the state’s FAIR Business Practices Act imposes new, lender-specific consumer protection requirements that raise the compliance bar even higher.
- AI underwriting: States, including California, Oregon and Texas, are zeroing in on AI-driven underwriting models under consumer protection and anti-discrimination laws.
The common thread among state regulators is the growing concern that algorithmic decision-making, often presented as objective and efficient, can quietly replicate or even exacerbate existing biases, especially when human overrides occur without proper oversight.
What These Enforcement Trends Mean for Lenders
State-level enforcement adds complexity to compliance efforts. While federal agencies tend to offer detailed guidance and consistent expectations, state actions can differ widely. For example, one state might focus on AI governance, while another targets data disclosures or pricing practices.
For risk, compliance and legal teams, this means:
- Model governance isn’t optional. Every algorithm used in decision-making must be explainable, testable and monitored for disparate impact. If you can’t show that your model is fair, regulators may assume it isn’t.
- Discretion must be governed. Human overrides of automated decisions need written policies, documented justifications and regular review (see the sketch after this list). Discretion without discipline is a growing liability.
- Adverse action notices are high-stakes. These aren’t routine disclosures; they are a frequent trigger for enforcement. Vague or inaccurate notices can erode consumer rights and invite regulatory scrutiny.
- Historical blind spots carry current risk. Today’s fair lending standards are being applied to lenders’ past practices. Institutions must be willing to audit past decisions and remediate any issues they uncover.
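As one way to picture “discretion with discipline,” the sketch below shows an override record that cannot be created without a written justification and a cited policy. Every field name here is hypothetical; the point is that documentation is enforced at the moment of the override rather than reconstructed later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverrideRecord:
    """One manual override of an automated decision.
    All fields are hypothetical; adapt to your own origination system."""
    application_id: str
    model_decision: str      # e.g. "deny"
    final_decision: str      # e.g. "approve"
    overridden_by: str       # employee identifier
    justification: str       # required written rationale
    policy_reference: str    # the written policy permitting this override
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        # Refuse to record discretion without documentation.
        if not self.justification.strip():
            raise ValueError("Override requires a written justification")
        if not self.policy_reference.strip():
            raise ValueError("Override must cite an authorizing policy")

# Example: logging an exception so it is reviewable later.
record = OverrideRecord(
    application_id="APP-1042",
    model_decision="deny",
    final_decision="approve",
    overridden_by="underwriter_17",
    justification="Verified income documentation not visible to the model",
    policy_reference="Fair Lending Policy 4.2",
)
```

Records like these give compliance teams a reviewable trail of when, why and by whom automated decisions were overridden, which is exactly what regulators look for when exceptions come under scrutiny.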
For lenders operating across state lines, the challenge is even greater. The patchwork of expectations means fair lending risk management must be scalable, nuanced and responsive to multiple regulatory regimes simultaneously.
Looking Ahead: From a Reactive to a Proactive Fair Lending Strategy
The days of treating fair lending as a box-checking exercise are over. Enforcement is becoming increasingly sophisticated, targeted and localized. State attorneys general are asking harder questions about how decisions are made, how data is used and who gets left out. And they’re not waiting for future violations — they’re reviewing the past with new scrutiny and broader tools.
These heightened expectations are a pivotal moment. Lenders can continue to view fair lending as a compliance obligation — or they can treat it as a strategic priority. Those who choose the latter will build systems that are resilient, transparent and aligned with a more expansive definition of fairness. Those who choose the former may find themselves explaining not just today’s decisions, but yesterday’s.
The shift is already underway. The question is whether your institution is adapting or falling behind.
Content provided by Ncontracts