Trump administration AI blueprint urges federal preemption of state rules, light-touch national standards

WASHINGTON — Colorado spent years crafting one of the nation’s first sweeping artificial intelligence laws, targeting “high-risk” algorithms used in hiring, lending and housing. On Friday, the Trump administration effectively asked Congress to put measures like that on ice.

In a four-page document released by the White House, titled "A National Policy Framework for Artificial Intelligence: Legislative Recommendations," the administration laid out a federal blueprint that would preempt many state AI regulations, reject a new AI super-regulator and back a lighter set of national standards centered on child safety, free speech and economic growth.

“The federal government must establish a clear, minimally burdensome national standard for artificial intelligence,” the framework states, warning that a “patchwork of 50 different state regulatory regimes” could slow innovation and weaken U.S. competitiveness.

The recommendations are not binding, and any federal AI law would have to clear a closely divided Congress in an election year. But the document marks the clearest bid yet by a U.S. administration to shift control of AI governance from statehouses to Washington while locking in a market-friendly approach favored by major technology firms.

Blueprint calls for preemption — with carve-outs

The framework, released March 20, urges lawmakers to write a federal statute that overrides “state AI laws that impose undue burdens on interstate commerce” and on AI development, which the document describes as “inherently national and international in character.”

At the same time, it says any preemption should not extend to what it calls states’ “traditional police powers.” The White House recommends that state and local governments retain authority to enforce general laws aimed at protecting children, preventing fraud, and safeguarding consumers, along with their existing control over zoning decisions and rules governing how state agencies themselves deploy AI.

States “should not unduly burden lawful activity merely because it involves AI,” the framework says. It also asks Congress to shield AI developers from being held liable under state law for unlawful acts by third parties using their models.

The document does not spell out in detail what federal obligations would replace stricter state rules in areas such as discrimination or algorithmic transparency. That gap is already drawing scrutiny from civil rights advocates and state officials who have led early efforts to regulate AI.

No new AI regulator, reliance on existing agencies

A central element of the plan is what it rejects: creating a new federal AI watchdog. The White House says Congress should rely instead on existing sector regulators — such as the Federal Trade Commission, Food and Drug Administration and Securities and Exchange Commission — along with industry-led technical standards.

The framework calls for “regulatory sandboxes” to allow controlled experimentation with AI applications, and for federal datasets to be made “AI-ready” so companies and researchers can use them more easily to train and test systems.

Michael Kratsios, director of the White House Office of Science and Technology Policy, has argued in recent interviews that the approach will “unleash American ingenuity to win the global AI race” while addressing targeted harms such as child exploitation and scams.

The administration’s stance contrasts with the European Union’s AI Act, which creates a dedicated, risk-tiered regulatory regime, and with the Biden administration’s earlier “Blueprint for an AI Bill of Rights,” which emphasized protections against algorithmic discrimination and surveillance but did not propose preemption of state law.

Child safety and deepfakes at the forefront

The document opens with a section on “Protecting Children and Empowering Parents,” reflecting the political salience of youth mental health and online harms.

The White House urges Congress to build on the 2025 TAKE IT DOWN Act, a federal law championed by First Lady Melania Trump that requires platforms to remove nonconsensual intimate images, including some AI-generated deepfake pornography. The new framework says AI services “must take measures to protect children” and recommends:

  • “Commercially reasonable, privacy-protective age-assurance requirements” for AI platforms likely to be accessed by minors.
  • Robust parental controls over children’s privacy settings, screen time and content exposure.
  • Product features designed to reduce risks of sexual exploitation and self-harm when minors use AI tools.

At the same time, the document warns against “ambiguous standards” about what content is permissible and against “open-ended liability” that it says could invite excessive litigation against platforms.

The framework explicitly recommends that federal AI law “should not preempt” states from enforcing generally applicable statutes, such as bans on AI-generated child sexual abuse material, or other child-protection laws.

Backing AI firms on training data and replicas

On intellectual property, the administration takes a position that aligns closely with large AI developers currently facing lawsuits from authors, artists and media companies.

“The Administration believes that training of AI models on copyrighted material does not violate copyright laws,” the framework declares, while acknowledging that rightsholders and some legal experts disagree. It urges Congress not to legislate on whether such training is “fair use” until federal courts resolve a series of ongoing cases.

Instead, the White House recommends that lawmakers explore collective licensing or other mechanisms that would allow creators to negotiate compensation with AI firms without running afoul of antitrust rules, while taking no position on when licensing should be required.

The document also calls for a federal framework governing AI-generated “replicas” of a person’s voice, likeness or other identifiable attributes. It suggests granting individuals some control over commercial use of their digital doubles, while preserving exceptions for parody, satire, news reporting and other speech protected by the First Amendment.

The administration’s stance puts it at odds with a bill introduced this week by Sen. Marsha Blackburn, R-Tenn., the “TRUMP AMERICA AI Act,” which would impose stricter requirements on AI companies that train on copyrighted works. Blackburn has said her proposal is aimed at protecting “children, creators, conservatives and communities.”

Free speech provisions target government ‘censorship’

Another section of the White House framework focuses on how government interacts with AI providers on content moderation, reflecting long-running conservative concerns about alleged political bias by technology platforms.

The document urges Congress to prohibit federal agencies from coercing or pressuring AI systems and online services to “ban, compel, or alter content based on partisan or ideological agendas.” It calls for new mechanisms allowing Americans to seek redress if officials attempt to influence AI outputs or online speech in ways that courts later find unconstitutional.

The recommendations do not restrict what private companies may choose to moderate on their own, but legal scholars say the language could make agencies more cautious about voluntary cooperation with platforms to counter election interference, health misinformation or other threats.

Civil liberties groups have long pushed to limit unconstitutional government jawboning of platforms, but some organizations are also pressing for closer public-interest engagement with AI firms to address discrimination and safety risks.

Energy, scams and workforce issues

Beyond governance and civil liberties, the framework touches on economic and infrastructure concerns that have become more visible as AI models grow more powerful — and power-hungry.

The White House asks Congress to codify a “Ratepayer Protection Pledge” it has secured from major AI and cloud providers, under which companies promise to shoulder the costs of new data center power demands so residential customers do not see higher electricity bills because of AI.

It also recommends streamlining federal permitting so data centers can more quickly build or contract “behind-the-meter” power generation, which the document argues could ease pressure on the broader grid and support reliability.

On community safety, the framework calls for stronger law enforcement tools to combat AI-enabled impersonation scams and fraud, particularly those targeting seniors and other vulnerable groups. It also urges more investment in technical capacity at national security agencies to assess the capabilities and risks of frontier AI models.

Workforce recommendations focus on integrating AI skills into existing education, apprenticeship and job training programs, as well as funding research on how AI will reallocate specific job tasks. The framework does not propose new federal income supports such as wage insurance or universal basic income.

Long-running battle with states

The new legislative recommendations build on an aggressive stance the administration took toward state AI laws in an executive order signed in December.

That order, “Ensuring a National Policy Framework for Artificial Intelligence,” created a Department of Justice task force to challenge state AI statutes deemed inconsistent with federal policy and directed the Commerce Department to compile a list of “onerous” state laws by March 2026. It also suggested using federal broadband and infrastructure funding as leverage with states that adopt conflicting rules.

Legal experts across the political spectrum have noted that an executive order cannot on its own preempt state statutes. By asking Congress to write preemption into law, the White House is now seeking the legislative authority it previously tried to assert through executive action.

Governors and attorneys general from both parties have signaled resistance. California Gov. Gavin Newsom, a Democrat, has described Trump’s AI orders as “an attack on state leadership and basic safeguards,” while Florida Gov. Ron DeSantis, a Republican, has argued that state “AI bills of rights” remain on solid legal ground until Congress clearly says otherwise.

A bipartisan group of 36 state attorneys general and hundreds of state legislators have already warned that sweeping preemption could undercut consumer protection and civil rights enforcement around AI.

Uncertain prospects in Congress

Congress has discussed national AI legislation for years without agreeing on a comprehensive bill. Efforts to attach a 10-year moratorium on new state AI laws to a 2025 omnibus package and to the 2026 defense policy bill both failed after widespread opposition, including a 99-1 procedural vote in the Senate to strip the preemption language.

Key Republicans, including Senate Majority Leader John Thune of South Dakota and Sen. Ted Cruz of Texas, who chairs the Commerce Committee, have backed the idea of a single national framework but also voiced concern about trampling states’ rights. Democrats have pushed for stronger guardrails on algorithmic discrimination, transparency and workplace surveillance than the White House plan contemplates.

With narrow Republican majorities and a packed agenda that includes other administration priorities such as election legislation, analysts say the odds of enacting a sweeping AI law this year are uncertain. Still, the framework is expected to shape negotiations over narrower measures on children’s online safety, deepfakes and data privacy.

For now, the release of the blueprint clarifies the administration’s opening position in a fight that will determine who writes the rules for an increasingly powerful technology — federal agencies in Washington, state lawmakers in capitals like Denver and Sacramento, or judges interpreting existing law case by case.

It also sharpens a broader question: as artificial intelligence systems move deeper into hiring, housing, education and health care, will the United States lean on a single, industry-friendly national standard, or preserve a patchwork of state experiments that often go further to protect consumers and civil rights? That answer will help determine not only where data centers are built and which books train chatbots, but how Americans are protected — or exposed — when algorithms make decisions about their lives.

Tags: #ai, #regulation, #congress, #statesrights, #childsafety