Toolkit | Wednesday, May 7, 2025, 10:36 GMT

Part 1: Introduction to AI Governance

This is Part 1 of our toolkit series on AI Governance for Africa. It introduces AI governance principles and approaches, and outlines international frameworks, with case studies from the European Union, the United States, and China.


While there is no universal definition of artificial intelligence (AI), UNESCO’s Recommendation on the Ethics of AI describes AI systems as those “…which have the capacity to process data and information in a way that resembles intelligent behaviour, and typically includes aspects of reasoning, learning, perception, prediction, planning or control.” [1]

These technologies have permeated our everyday lives. They are no longer confined to big tech companies or billionaires; ordinary individuals have AI in their pockets – and they are using it.

This growing prevalence of AI systems has accelerated the need for AI governance frameworks. The 2022 public release of ChatGPT – OpenAI’s chatbot capable of generating novel content in response to prompts – is a prominent milestone in the public story of AI.[2] Its release sparked rapid growth in products and applications that make generative AI more accessible to ordinary people, and that have arguably made the uses and risks of AI technologies more visible in the public imagination.

Beyond large language models (LLMs) like ChatGPT, everyday examples of AI systems include rideshare apps, online banking systems, and e-commerce platforms. Governments the world over are using AI for a wide range of activities, including building smart city platforms, bolstering policing systems, and delivering public services.

As AI systems become increasingly embedded in everyday life, they raise important questions about regulation, ethics, and their potential impact on human rights. These questions take shape in debates about the governance of AI: how do we provide protection without stifling innovation? How can the law keep pace with the evolving nature of AI? Should AI be governed internationally or domestically?

Despite its complexity, AI governance is clearly a global concern. Countries are at different stages of resolving these questions and have implemented a range of governance instruments in response.

This toolkit series is a continuation and update of work the Thomson Reuters Foundation first undertook in 2023. It unpacks the context of AI governance, in Africa and globally, and considers advocacy approaches for future governance, with a particular focus on the Southern African context. It does so in the following ways:

  • This part, Part 1, gives an introduction to AI governance principles and approaches, and outlines international frameworks, with case studies from the European Union (EU), the United States, and China. In doing so it considers governance trends and important considerations included in governance instruments.
  • Part 2 examines existing and emerging AI governance instruments in Southern Africa – in particular, South Africa, Zambia, and Zimbabwe. More broadly, it also outlines continental responses and details existing governing measures in Africa.
  • Part 3 explores a series of key questions for the design of advocacy strategies on AI governance, particularly in African contexts.


Silhouettes of demonstrators are seen as they march around the Hungarian parliament to protest against Hungarian Prime Minister Viktor Orban and the latest anti-LGBTQ law in Budapest, Hungary, June 14, 2021. REUTERS/Marton Monus