Overview: A Sectoral, Principle-Based Approach
Unlike the European Union, the United States does not currently possess a single, omnibus federal statute governing Artificial Intelligence. Instead, the U.S. employs a decentralized, sectoral enforcement model where existing consumer protection, civil rights, and prudential statutes are applied to new technologies. Federal regulators have clarified that there is "no AI exemption" to these laws, meaning algorithmic complexity does not excuse non-compliance with fair lending, anti-discrimination, or safety and soundness obligations.
The Three Pillars of Compliance: Current AI governance in the U.S. rests on three primary regulatory pillars:
Key Regulatory Developments
The Role of Technical Standards: In the absence of specific legislation, the NIST AI Risk Management Framework (AI RMF) has emerged as the de facto benchmark for compliance. While voluntary, adherence to its four core functions—Govern, Map, Measure, and Manage—allows organizations to demonstrate reasonable due diligence to regulators. NIST has also released a profile specifically addressing the unique risks of Generative AI.
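For teams operationalizing the framework, the four core functions can be tracked as a simple evidence checklist. The sketch below is illustrative only: the function names (Govern, Map, Measure, Manage) come from the AI RMF itself, but the `RMFChecklist` structure and the example activities in the comments are assumptions for demonstration, not anything NIST prescribes.

```python
# Illustrative sketch: tracking evidence against the NIST AI RMF core functions.
# The four function names are from the AI RMF; this data structure and the
# example activities in comments are hypothetical, not NIST requirements.
from dataclasses import dataclass, field


@dataclass
class RMFChecklist:
    # Maps each AI RMF core function to a list of recorded evidence items.
    functions: dict = field(default_factory=lambda: {
        "Govern": [],   # e.g., accountability policies, risk tolerances
        "Map": [],      # e.g., use-context and impact identification
        "Measure": [],  # e.g., bias, performance, and robustness testing
        "Manage": [],   # e.g., risk treatment and ongoing monitoring
    })

    def record(self, function: str, evidence: str) -> None:
        # Reject anything outside the four AI RMF core functions.
        if function not in self.functions:
            raise ValueError(f"Unknown AI RMF function: {function}")
        self.functions[function].append(evidence)

    def coverage(self) -> dict:
        # True for each function with at least one evidence item recorded.
        return {f: bool(items) for f, items in self.functions.items()}


checklist = RMFChecklist()
checklist.record("Measure", "Quarterly disparate-impact testing report")
print(checklist.coverage())
```

A structure like this makes gaps visible at a glance: a function whose coverage is still `False` is a place where an organization would struggle to demonstrate the "reasonable due diligence" the framework is used to evidence.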
Emerging Federal vs. State Tensions: A significant shift in the landscape is underway following a December 2025 Executive Order. This directive aims to establish a national framework that preempts a "patchwork" of state regulations. The policy explicitly seeks to challenge state laws that require entities to embed "ideological bias" or produce "non-truthful" outputs in AI models, creating a potential conflict between federal deregulation goals and state-level algorithmic fairness mandates.