US AI Rules

Executive Summary: Navigating the U.S. AI Regulatory Landscape

Overview: A Sectoral, Principle-Based Approach

Unlike the European Union, the United States does not currently possess a single, omnibus federal statute governing Artificial Intelligence. Instead, the U.S. employs a decentralized, sectoral enforcement model where existing consumer protection, civil rights, and prudential statutes are applied to new technologies. Federal regulators have clarified that there is "no AI exemption" to these laws, meaning algorithmic complexity does not excuse non-compliance with fair lending, anti-discrimination, or safety and soundness obligations.

The Three Pillars of Compliance: Current AI governance in the U.S. rests on three primary regulatory pillars:

  1. Civil Rights and Fairness: Enforced by agencies such as the EEOC and DOJ, focusing on preventing algorithmic bias in hiring and lending.
  2. Consumer Protection: Enforced by the FTC and CFPB, requiring transparency and explanations of adverse actions (such as loan denials) to consumers.
  3. Prudential Risk Management: Enforced by banking regulators (FRB, OCC, FDIC) and market regulators (CFTC, FINRA), ensuring financial stability and market integrity.

Key Regulatory Developments

  • The "Joint Statement" Standard: In 2023, the CFPB, DOJ, EEOC, and FTC issued a unified stance confirming they will leverage combined enforcement authorities to hold financial institutions accountable for discriminatory outcomes resulting from automated systems.
  • Automated Valuation Models (AVMs): New final rules mandate that mortgage originators adopt quality control standards for AVMs to ensure confidence in estimates, prevent data manipulation, and comply with nondiscrimination laws.
  • Electronic Trading Risk: The CFTC has adopted principles-based rules requiring exchanges (DCMs) to implement risk controls to prevent market disruptions caused by algorithmic trading, moving away from previous proposals that required routine access to proprietary source code.
  • Third-Party Liability: Regulators like the OCC and FINRA emphasize that banks and broker-dealers cannot outsource risk. Institutions remain liable for the compliance of third-party AI vendors and must conduct rigorous due diligence.

The Role of Technical Standards: In the absence of specific legislation, the NIST AI Risk Management Framework (AI RMF) has emerged as the de facto benchmark for compliance. While voluntary, adherence to its four core functions—Govern, Map, Measure, and Manage—allows organizations to demonstrate reasonable due diligence to regulators. NIST has also released a profile specifically addressing the unique risks of Generative AI.

Emerging Federal vs. State Tensions: A significant shift in the landscape is underway following a December 2025 Executive Order. This directive aims to establish a national framework that preempts a "patchwork" of state regulations. The policy explicitly seeks to challenge state laws that require entities to embed "ideological bias" or produce "non-truthful" outputs in AI models, creating a potential conflict between federal deregulation goals and state-level algorithmic fairness mandates.

December 13, 2025