
Emerging AI governance approaches

The previous sections looked at the underlying challenges that drive the need for AI governance, and at some of the social and political factors that shape how different countries approach these policy questions. But what forms can AI governance actually take? This section explores the different types of AI governance.


As in many areas of technology regulation, AI regulation faces what is sometimes called the “Collingridge dilemma”: if regulation is created before the impacts of a technology are clear, it may not function as intended (for example, it may stifle innovation without fully achieving its goals); but by the time those impacts are clear, the technology may be too entrenched to regulate effectively – or at least, regulating it becomes more challenging over time, as the technology is already embedded in everyday life.[1] We have seen these dynamics at play in efforts to regulate social media over the last fifteen years. The challenge is even more acute for AI, as the technology is advancing rapidly and it is difficult to predict what capabilities these systems will develop over the next decade.

In recent years, countries across the world have been responding to this regulatory challenge in different ways. Because regulation is evolving rapidly in this area, it is useful to categorise the different types of regulatory efforts currently underway – only a small portion of which are binding.

There are a range of dimensions we can use to categorise these efforts:

  • Area of focus: This relates to the specific aspects of an AI technology that the regulation seeks to address, such as:
    • Specific rights or harms: Regulation may seek to address particular AI risks, such as algorithmic bias or discrimination, or enforce particular principles or rights, such as transparency, oversight and accountability. As we will see, many regulations try to cut across these categories.
    • Innovation or development: Certain regulations may seek to create an enabling environment for AI development in their jurisdiction, for example by establishing institutions or funding mechanisms to advance AI policy, research, and training.
  • Relevant parties: Regulatory instruments on AI can confer rights, duties, or liability on different parties involved in the development, distribution, or use of AI technologies. These could include:
    • The provider of the technology (the person or entity who developed the AI or put it on the market);
    • The deployer (a person or entity that is using AI technology developed by another party for some kind of official or non-personal use – such as a business or government);
    • An end user or subject (a person who uses, or is subject to, an AI system developed and deployed by others);
    • Note that AI governance frameworks may also impose different responsibilities on government actors than on private industry.
  • Sectoral scope: Although some AI regulatory efforts attempt to provide a comprehensive or universal framework (such as the EU’s AI Act, discussed below), others target specific sectors or uses – for example, setting standards for government uses of AI (as in a recent law enacted in the US state of New Hampshire) or setting rules for the use of AI in targeted contexts (as in recent laws enacted in the state of California, which regulate specific uses of AI in the entertainment industry and healthcare).
  • Nature of framework: The different governance frameworks in this area can be placed on a spectrum from least to most binding, often reflecting how developed (or under-developed) the regulatory effort is in a given jurisdiction:
    • The least binding instruments are discussion documents or “white papers” and voluntary commitments made by relevant stakeholders (notably AI companies), which are accordingly unenforceable.
    • Next come guidelines, declarations of principle, and related soft-law instruments. These are often made by stakeholders who do not necessarily have regulatory enforcement powers but can set the parameters for future regulation.
    • Further along are national AI strategies (which typically set out the steps a country will follow to achieve its goals relating to AI, often including actions to expand development and research, and to produce policy and regulation) and national AI policies (which typically establish the principles that guide a country’s approach to AI).
    • National or regional AI legislation is more binding, but the field is sparse: the first comprehensive AI law – the EU’s AI Act – was only enacted in 2024, a few months before this toolkit was drafted.[2]

In the following sections, we unpack more information about the emerging regulatory approaches in three major jurisdictions: the United States, China, and the European Union.


References

[1] More information on the Collingridge dilemma is available here.

[2] European Commission “AI Act enters into force” (August 2024).
