Toolkit Wednesday, May 7 2025 10:36 GMT

AI governance in the United States

The previous section looked at AI governance approaches in Europe. Now let’s turn to the situation in the United States.


In contrast to the EU’s comprehensive and centralised approach, the United States’ approach has been more piecemeal, sector-specific, and distributed across various federal agencies,[1] although US state legislatures produced a rush of non-federal AI regulation in 2024.

The federal government has primarily led with non-binding interventions and sector-specific guidelines which fall within the powers of the executive. For example, the US Copyright Office has issued rulings that suggest that most text, images, and videos created by AI systems cannot be copyrighted as original works.

The absence of binding federal regulation could be read either as a deliberate policy orientation or as a function of the polarised and partisan make-up of the US Congress, which has a poor record on tech regulation (the US still has no federal data protection law, for example) and which was historically unproductive in 2023.[2] There have been proposals for an Algorithmic Accountability Act, which would require companies to evaluate the bias and effectiveness of their AI systems and would task the country’s Federal Trade Commission with enforcing this requirement,[3] but at the time this toolkit was produced there were no signs of progress.

As we will explore below, in the absence of binding federal action, many US states have undertaken their own policymaking at state level.

Administrative actions

  • Executive order: In October 2023 US President Joe Biden issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,[4] which mandated a range of federal agencies to develop standards for the safe and ethical design and use of AI in their various sectors, and which imposed new reporting requirements on companies developing AI with national security implications, requiring them to share their testing data with the US government. Although these actions were largely directed at internal processes of the federal government, within nine months agencies had undertaken over a hundred actions or policy processes in response.[5] This suggests more meaningful progress than that of a 2019 Executive Order from the Trump administration (“Maintaining American Leadership in Artificial Intelligence”), which also mandated various federal agencies to develop plans to regulate AI applications; by December 2022, only one of the 41 major agencies (the Department of Health and Human Services) had meaningfully created such a plan.[6]
  • Blueprint for AI Bill of Rights: Prior to its Executive Order, the Biden administration also issued the 2022 ‘Blueprint for an AI Bill of Rights’,[7] a non-binding document that sets out five principles[8] and associated practices to guide the development and use of AI. It tasks different federal agencies with implementation in their respective policy sectors (like health, labour, and education). The Biden administration also secured voluntary commitments from major AI developers in the US to meet certain standards for testing and transparency of their systems; by mid-2024 16 companies had signed on to these commitments, including Amazon, Anthropic, Apple, Google, Inflection, Meta, Microsoft, and OpenAI.

The re-election of Donald Trump to the US Presidency in 2024 may result in these administrative actions being rolled back, and federal regulation on AI seems unlikely in the foreseeable future. In the absence of binding policy at the federal level, many state legislatures across the United States have introduced, and in some cases enacted, state laws relating to AI.

California state law

In particular, in 2024 the state of California enacted a suite of 17 laws addressing various AI-related concerns.[9] Although the Governor vetoed the most far-reaching AI bill produced by California lawmakers (see below) the remaining bills comprise the most substantive AI regulation in US state law, both because of their content, and because of California’s outsized regulatory influence as the biggest state economy in the US, and as the home state of many of the world’s leading AI developers.

  • Vetoed bill on AI safety: California’s governor vetoed the most prominent AI bill sent by state legislators, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, in September 2024. If it had been enacted, the Act would have imposed stringent new requirements on developers to prevent their technologies from causing serious harm, including severe fines and civil liability for failing to comply with mandatory reporting and safety procedures.[10]
  • Other California AI bills: The governor did sign into law 17 other bills addressing a raft of other issues in AI, most of which come into force in 2025 and 2026.[11] Their provisions include:
    • Expansions of the state’s data privacy law to ensure it extends to personal information used in the training or operation of AI tools;
    • Stronger protections for actors and other entertainers against AI-generated replicas of their voice or likeness;
    • Requirements for major generative AI systems to make AI-generated content easier to identify, through ‘watermarking’, providing tools to verify if content was created or modified with AI, and other measures;
    • Minimum transparency requirements for generative AI systems to disclose high-level information about the data used to train them;
    • A requirement for health providers to provide a disclosure and disclaimer when using generative AI to communicate clinical information to patients, and to provide clear instructions for how to contact a human healthcare provider;
    • Expansions of the state’s laws on child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) to ensure their penalties extend to media generated or modified with AI.

Colorado state law

In May 2024, the state of Colorado enacted Senate Bill 24-205 (the Colorado Artificial Intelligence Act), which approaches AI safety through the lens of consumer protection, and which applies to anyone doing business in the state. A summary of its provisions is as follows:[12]

  • The Act principally applies to “high risk” AI systems, defined as those whose outputs result in a “consequential decision” – meaning a decision which materially affects a person’s access to education or employment opportunities, financial services, housing, healthcare, or essential government services – or the cost thereof.
  • It imposes obligations on both developers of AI technologies (those who design the technology) and deployers (those who use the technology for a commercial purpose);
  • Developers of such models need to take proactive measures in their design process to address risks of algorithmic discrimination resulting from their technology, and to disclose information about its functioning, testing, and safeguards against discrimination, as well as high-level information about its training data;
  • Deployers must take similar steps to document and disclose such details, and must report any incidents of discriminatory outcomes to the state Attorney General;
  • Where such an AI system makes or contributes to a consequential decision that is adverse to a consumer, the deployer must provide the consumer with reasons for the decision, including an explanation of how the AI system contributed to the decision, and what data was used to reach it. The deployer must also give the consumer an opportunity to appeal the decision and get a human review.

Other US state laws

According to the National Conference of State Legislatures, as of September 2024 at least 44 other US states introduced AI bills in 2024, and at least 30 of those states enacted one or more AI laws.[13] Of these:

  • At least eight regulate the use of AI in the context of elections or political advertising,[14] for example by requiring a disclosure for political advertising that uses AI-generated content, or by making it an offence to distribute deceptive AI-generated media (such as ‘deepfakes’) to influence an election;
  • At least four expand the scope of existing laws on child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) to include media generated or modified with AI;[15]
  • At least four enacted prohibitions on certain uses of AI by government bodies.[16] For example, New Hampshire’s HB 1688 prohibits state agencies from using AI for real-time biometric surveillance, except with a warrant, or from categorising people by behaviour or class where this results in unlawful discrimination; Utah passed criminal justice amendments which restrict the courts’ reliance on algorithmic assessments in probation decisions;
  • At least seven states passed laws or resolutions which do not directly regulate the design or use of AI, but serve primarily to establish an advisory body or task force to develop recommendations or policies to guide the state’s approach on artificial intelligence.[17]

As of September 2024, more than 200 other AI bills were listed as ‘pending’ in state legislatures.[18]


References

[1] A Engler “The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment” Brookings Institution (April 2023).

[2] C Hunt “Is this the least productive congress ever?” The Conversation (August 2024).

[3] K Piper “There are two factions working to prevent AI dangers. Here’s why they’re deeply divided” Vox (August 2022).

[4] Executive Office of the President “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” (October 2023).

[5] Executive Office of the President “FACT SHEET: Biden-⁠Harris Administration Announces New AI Actions and Receives Additional Major Voluntary Commitment on AI” (July 2024).

[6] See A Engler “The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment” Brookings Institution (April 2023).

[7] Executive Office of the President “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” (October 2022).

[8] The principles include: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives. See this White House briefing note.

[9] Office of the Governor of California “Governor Newsom announces new initiatives to advance safe and responsible AI, protect Californians” (September 2024).

[10] See Anderson and others “Raft of California AI Legislation Adds to Growing Patchwork of US Regulation” whitecase.com (October 2024).

[11] Ibid.

[12] See K Anderson “The Colorado AI Act: What you need to know” bakertilley.com (September 2024).

[13] See database published by NCSL “Summary: Artificial Intelligence 2024 Legislation” (September 2024).

[14] Alabama, Colorado, Florida, New Hampshire, New Mexico, Oregon, Utah, and Wisconsin.

[15] Alabama, Florida, North Carolina, and South Dakota.

[16] Maryland, New Hampshire, Utah, and New York (pending enactment by the governor).

[17] Delaware, Florida, Indiana, Massachusetts, Pennsylvania, Tennessee, and West Virginia.

[18] NCSL “Summary: Artificial Intelligence 2024 Legislation” (September 2024).
