Institute for Social Vision Design

AI Regulation in the United States: Federal vs. State — Can a Unified Framework Be Achieved?

Federal preemption and independent state regulations are on a collision course in U.S. AI policy. This article examines the legislative frameworks of California, Colorado, and Texas, and unpacks the structural challenges of AI governance.

ISVD Editorial Team

What Is Happening

California (frontier model regulation): TFAIA (compute threshold) + AI Transparency Act (watermark obligation)

Texas (use-based regulation): TRAIGA (prohibition of self-harm promotion, discrimination, CSAM, etc.)

Colorado (anti-discrimination): SB 24-205 (impact assessment for high-risk AI)

The federal government seeks to preempt this "patchwork" in favor of a unified framework, but doing so risks shutting down states' role as "laboratories of democracy."

AI regulation approaches of three US states — Different design philosophies based on technical specs, use cases, and social impact

An unprecedented tension has emerged between the federal government and the states over AI regulation in the United States.

President Trump issued an executive order directing the establishment of a federal AI policy framework. This framework is designed to potentially invalidate state laws that conflict with federal policy. He further ordered the Attorney General to create an AI litigation task force, building a structure to challenge state laws on grounds of unconstitutional regulation of interstate commerce and federal preemption. This represents the first serious attempt by the federal government to explicitly restrict state regulatory authority over AI.

Meanwhile, individual states have been rapidly advancing their own regulations. California has enacted two laws. The Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53, effective January 2026) requires developers of frontier models trained using more than 10^26 FLOPs of compute to publish risk frameworks and implement safety measures. The AI Transparency Act (SB 942, effective August 2026) mandates watermarking of AI-generated content and the provision of detection tools.

Texas's Responsible AI Governance Act (TRAIGA, HB 149, signed June 2025) prohibits AI systems used for "restricted purposes," including encouraging self-harm, unlawful discrimination, infringement of constitutional rights, and generation of child sexual abuse material (CSAM). Colorado's SB 24-205 aims to prevent algorithmic discrimination by high-risk AI systems. Originally scheduled for enforcement in February 2026, it has been postponed to June 30, 2026.

Notably, there are explicit exceptions to federal preemption. State laws concerning child safety, AI compute and data center infrastructure, and state governments' own AI procurement and use are excluded from federal preemption. Furthermore, even in cases where federal and state jurisdiction is contested, state laws remain enforceable until a court ruling is issued.

Background and Context

The tension between federal and state regulatory authority in the United States is not unique to AI. It is a structural pattern that has recurred whenever new social challenges emerge — from environmental regulation to financial regulation to data privacy.

A classic example is California's vehicle emission standards. California set its own standards stricter than the federal EPA baseline, and other states split between following California or adhering to federal standards. A similar dynamic is now reproducing itself in AI regulation. California imposes the most stringent requirements, Texas introduces use-specific regulations, and Colorado focuses on anti-discrimination measures. Each state designs its regulatory approach based on a distinct philosophy.

This "patchwork" creates problems from both innovation and citizen protection perspectives. For businesses, compliance costs multiply with each additional state-level regime. For citizens, the level of protection varies depending on their state of residence — an asymmetry that raises fundamental equity concerns.

The federal government's pursuit of a unified AI framework is driven in part by industry demands to avoid such practical confusion. However, exercising federal preemption also risks suppressing states' experimental regulations — what Justice Brandeis famously called "laboratories of democracy." Frontier model regulations like California's TFAIA are still in the early stages of discussion at the federal level. Preemption could sever the pathway by which states generate insights that inform federal policymaking.

The international context also warrants attention. The EU AI Act was adopted in 2024, applying a risk-based comprehensive regulatory framework in stages. It established a systematic structure including conformity assessments for high-risk AI systems, transparency obligations for general-purpose AI models, and a list of prohibited AI applications. In Japan, AI-related legislation (AI関連法) took full effect in September 2025, though it does not include penalty provisions.

The U.S. debate differs from both the EU's comprehensive regulation and Japan's soft-law approach without penalties. Instead, it is navigating regulatory design within the distinctive institutional structure of federal–state power distribution. The very framing of the question — "which level of government should regulate?" — is itself a defining feature of American AI governance.

Structural Reading / Seeds of Social Vision

The essence of this conflict lies not in the methodology of technology regulation but in the design principles of governance. Three structural questions intersect.

First, there is the mismatch between the pace of regulation and the pace of innovation. AI technology evolves far faster than legislative processes can respond. The 10^26 FLOP threshold set by California's TFAIA is calibrated to today's frontier models. However, advances in computational efficiency and algorithmic improvement may enable equivalent capabilities with far fewer resources. Regulation anchored to a fixed threshold carries an inherent risk of obsolescence as technology evolves.
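To make the threshold concrete, the arithmetic can be sketched as follows. This is a minimal illustration, not anything defined in the statute: it uses the common 6·N·D rule of thumb (total training compute is roughly 6 × parameter count × training tokens), and the model sizes in the examples are hypothetical.

```python
# Rough sketch: does a training run cross a TFAIA-style 10^26 FLOP line?
# Uses the common 6*N*D approximation for total training compute;
# the statute itself defines the threshold, not the estimation method.

THRESHOLD_FLOP = 1e26  # TFAIA-style frontier-model compute threshold


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D heuristic."""
    return 6 * params * tokens


def crosses_threshold(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) >= THRESHOLD_FLOP


# A hypothetical 1-trillion-parameter model trained on 20 trillion tokens:
# 6 * 1e12 * 20e12 = 1.2e26 FLOPs, above the line.
print(crosses_threshold(1e12, 20e12))  # True

# A smaller run that algorithmic progress might make comparably capable:
# 6 * 3e11 * 15e12 = 2.7e25 FLOPs, below the line.
print(crosses_threshold(3e11, 15e12))  # False
```

The second case illustrates the obsolescence risk: a fixed compute line does not move when efficiency gains let smaller runs reach frontier-level capability.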

The "use-based" approach adopted by Texas's TRAIGA — prohibiting specific harmful applications — offers one response to this problem. By focusing on how technology is used rather than its specifications, this design aims for greater resilience to technological change. However, defining the boundaries of "restricted purposes" ultimately remains a political judgment.

The second question concerns the trade-off between promoting innovation and protecting citizens. The federal framework seeks to reduce corporate compliance costs and accelerate innovation through regulatory unification. But if unification means convergence toward the least restrictive standard, the level of citizen protection may decline relative to states' independent regulations.

Colorado's SB 24-205 and its attempt to prevent algorithmic discrimination is instructive here. It mandates impact assessments for high-risk AI systems and requires corrective action when discriminatory outcomes occur. If such protections are nullified by federal preemption, the people most affected by AI — those subjected to algorithmic decision-making in hiring, lending, and criminal justice — stand to lose the most.

At an even deeper level lies the question of democratic legitimacy: who gets to make the rules? The construction of a regulatory framework through executive order bypasses the congressional legislative process. State laws, by contrast, have passed through state legislatures and been signed by governors — they are products of democratic procedure. If the basis for federal preemption rests solely on an executive order, its legal stability may prove fragile. The provision allowing state laws to remain enforceable until a court ruling is issued reflects this very instability.

The explicit carving out of child safety from federal preemption offers one instructive precedent. A design that preserves state regulatory authority for specific protected interests suggests the possibility of dividing federal–state roles not as "all or nothing" but by domain. A multi-layered governance model — selecting the most appropriate regulatory level for each type of AI risk, whether discrimination, privacy, safety, or transparency — is theoretically compelling, but its implementation complexity is formidable.

Japan's enactment of AI-related legislation without penalties can be seen as an experiment in governance through soft law. Yet the question of how to ensure effectiveness remains. The federal-versus-state battle in the United States is one phase in a pendulum swinging between over-regulation and under-regulation, and its outcome will shape not only domestic policy but the direction of global AI governance.

Remaining Questions

The speed at which technology transforms society and the speed at which society governs technology — how great a gap between these two can we tolerate? The federal–state conflict is merely the American manifestation of this question. No country yet possesses a complete answer.


References

Transparency in Frontier Artificial Intelligence Act (TFAIA) — SB 53

California State Legislature. California Legislative Information


Texas Responsible AI Governance Act (TRAIGA) — HB 149

Texas Legislature. Texas Legislature Online


Regulation (EU) 2024/1689 — Artificial Intelligence Act

European Parliament and Council. EUR-Lex

