Comprehensive Schemes

Comprehensive Approaches to AI Regulation

The United States does not have a comprehensive federal scheme of AI regulation. While the version of the “Big Beautiful Bill” passed by the House of Representatives would have imposed a ten-year moratorium on state AI regulation, the moratorium was not included in the adopted version of the bill. However, President Trump has signed two executive orders that aim to boost the U.S. AI industry and limit states’ ability to regulate AI. Whether the President can preempt state legislation via executive order is a question likely to play out in the courts in the coming months.

Presently, in the gap created by the absence of comprehensive federal regulation, most state regulations target specific policy areas. However, following Colorado’s lead, states have begun to consider statewide, comprehensive schemes.

The paragraphs below introduce President Trump’s Executive Orders, the Colorado comprehensive approach, and the most influential international comprehensive scheme, the EU AI Act. Both Colorado and the EU regulate AI systems based on the level of risk such systems pose.

Ensuring a National Policy Framework for Artificial Intelligence (Executive Order)

The Ensuring a National Policy Framework for Artificial Intelligence Executive Order, signed December 11, 2025, establishes a federal preemption strategy that uses multiple enforcement mechanisms to override state AI laws. The order directs the Attorney General to establish an AI Litigation Task Force within 30 days to challenge state laws on constitutional grounds, including unconstitutional regulation of interstate commerce and conflict with existing federal regulations. It also requires the Secretary of Commerce to publish, within 90 days, an evaluation identifying state laws that (1) conflict with federal policy; (2) should be referred to the Litigation Task Force; or (3) require AI models to alter truthful outputs or compel disclosures, in asserted violation of the First Amendment.

The order also employs fiscal pressure to secure non-enforcement of existing state laws through two channels: (1) directing the Secretary of Commerce to make states with identified problematic laws ineligible for non-deployment funds under the Broadband Equity, Access, and Deployment (BEAD) Program, and (2) directing federal agencies to consider conditioning discretionary grants on states either refraining from enacting conflicting AI laws or entering binding non-enforcement agreements during grant performance periods.

Complementing these immediate enforcement tools, (1) the FCC Chairman must initiate a proceeding to determine whether to adopt federal reporting and disclosure standards that preempt state requirements, (2) the FTC Chairman must issue a policy statement explaining circumstances under which state laws mandating output alterations are preempted by federal prohibitions on deceptive practices under 15 U.S.C. 45, and (3) the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology must jointly prepare legislative recommendations for a uniform federal framework preempting conflicting state laws with explicit exemptions for child safety, data center infrastructure, and state government procurement and use of AI.

Launching the Genesis Mission (Executive Order)

The Genesis Mission Executive Order, signed November 24, 2025, establishes a federal artificial intelligence initiative centered on infrastructure provision rather than regulatory mandates. It creates the “American Science and Security Platform,” an integrated system that consolidates the Department of Energy’s national laboratories, federal supercomputing resources, and government datasets to accelerate AI research in biotechnology, nuclear energy, semiconductors, and quantum information science. The order assigns implementation to the Secretary of Energy, who must establish computing resources, AI frameworks, and secure data access. It sets forth various timelines, including requiring the Secretary of Energy to identify at least 20 priority challenges within 60 days and to demonstrate initial capability of the Platform within 270 days. The Secretary of Energy must also develop frameworks for public-private partnerships, including research agreements and intellectual property arrangements. The order contains no preemption clause affecting state AI laws, creates no private right of action, and is subject to Congressional appropriations.

Colorado’s Approach

The Colorado Artificial Intelligence Act (CAIA), which will become effective June 30, 2026, establishes a comprehensive, risk-based framework that focuses on “high-risk artificial intelligence systems,” defined as any AI system that makes, or is a substantial factor in making, “consequential decisions” with material legal or similar effects in areas such as employment, healthcare, insurance, housing, financial services, and education. The Act creates a general duty of reasonable care for both developers and deployers of high-risk systems to protect consumers from “algorithmic discrimination,” defined as unlawful differential treatment or impact based on protected characteristics such as race, gender, age, or disability. Developers must provide deployers with detailed documentation describing reasonably foreseeable uses, known limitations, and risks of algorithmic discrimination, while deployers must implement risk management policies, conduct annual impact assessments, notify consumers before AI makes consequential decisions, provide opportunities to correct inaccurate data, and allow appeals with human review when technically feasible. Deployers must also report algorithmic discrimination to the Colorado Attorney General within 90 days of discovery, though small deployers with fewer than 50 employees that do not train AI with their own data are exempt from many requirements. Violations are enforced exclusively by the Attorney General, with penalties of up to $20,000 per violation.

European Union’s Approach

The European Union AI Act (Regulation (EU) 2024/1689) establishes a risk-based regulatory system applying across all 27 EU member states. The Act categorizes AI systems into four risk levels: (a) unacceptable risk (prohibited practices including social scoring, real-time biometric identification in public spaces, and cognitive behavioral manipulation), (b) high-risk systems (subject to strict requirements including registration in an EU database, impact assessments, human oversight, and transparency obligations), (c) limited-risk systems (requiring disclosure that users are interacting with AI), and (d) minimal or no risk (largely unregulated).

High-risk AI systems—including those used in critical infrastructure, education, employment, law enforcement, and as safety components of regulated products—must meet stringent requirements on data governance, technical documentation, recordkeeping, transparency, and cybersecurity before being placed on the market and throughout their lifecycle. The Act also establishes dedicated rules for general-purpose AI models, with high-impact models presenting systemic risks subject to additional requirements, including model evaluations, adversarial testing, and incident reporting, and imposes substantial penalties for non-compliance of up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher.

The Act’s requirements phase in over time, with full applicability on August 2, 2027.

| Jurisdiction | Statute/Order | Link | Effective Date |
| --- | --- | --- | --- |
| California | AI Definition Bill | AB 2885 | 1/1/25 |
| California | Artificial Intelligence Training Data Transparency Act | AB 2013 | 1/1/26 |
| California | California AI Transparency Act | Cal. Gov. Code § 22757.1; SB 942 | 1/1/26 |
| Colorado | Colorado AI Act | Senate Bill 24-205 | 6/30/26 |
| EU | EU AI Act | https://artificialintelligenceact.eu/ai-act-explorer/ | Staggered |
| EU | EU AI Act Summary | https://artificialintelligenceact.eu/high-level-summary/ | Not Applicable |
| Federal | Ensuring a National Policy Framework for Artificial Intelligence | https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/ | 12/11/25 |
| Federal | Launching the Genesis Mission | https://www.whitehouse.gov/presidential-actions/2025/11/launching-the-genesis-mission/ | 11/24/25 |
| Texas | Texas Responsible Artificial Intelligence Governance Act (TRAIGA) | HB 149 | 1/1/26 |
| Utah | Utah Artificial Intelligence Policy Act | SB 149; Utah Code § 13-72-301 et seq. | 5/1/24 |