Countries across the world are scrambling to regulate AI. To understand the regulatory approaches being adopted, this section briefly examines the social and political context that shapes how countries approach AI governance.
Several factors currently influence the creation of international AI governance.
- Asymmetric AI development: Training cutting-edge AI models costs hundreds of millions of dollars, owing to the immense amounts of computing power, data, and other raw materials required,[1] which limits who can create these models. At present, AI development is overwhelmingly driven by private industry rather than academic or government institutions,[2] with the vast majority of advanced AI models coming from labs in the United States (such as OpenAI, Google, Meta, and Anthropic); labs in China are a distant second, with the United Kingdom and the EU in third place.[3] This concentration gives disproportionate influence to the countries in which the major AI labs reside, as these labs are bound first by national regulation. It also means the labs themselves carry significant weight: through self-regulation and other industry measures, they may act as de facto regulators in this space, shaping behaviour across the rest of the world.
- Global power dynamics: AI systems offer clear benefits to national interests, including enhanced military capabilities, greater economic advantage, and even tighter state control over citizens, which may be particularly appealing in authoritarian contexts. Some states may therefore aim to advance this technology as quickly as possible and to prioritise their own national interests, rather than to support collaborative, globally harmonised regulatory frameworks.
- Pre-existing regulatory cultures: The processes by which policies are made, and the institutional arrangements supporting them, vary across the world in line with countries’ differing legal traditions and social priorities. The EU, for example, is known for its precautionary approach and comprehensive regulatory frameworks, such as the General Data Protection Regulation (GDPR), while the United States’ approach is typically more sector-specific and federal, tilted in favour of innovation rather than caution. As we shall see below, these pre-existing cultural differences represent the different starting points from which states craft their AI regulation.[4]
- Waves of public interest: Since the release of ChatGPT – and its associated ‘hype cycle’ – there has been considerable appetite across the world for more comprehensive AI governance. Most CEOs of the major AI companies have supported calls for some form of regulation, although the precise form they believe it should take remains vague.[5]
- Role of civil society and advocacy groups: Efforts from international civil society and related groups have coalesced around two broad themes – those primarily concerned with AI risks related to misinformation, prejudice, copyright, and potential human rights violations (“AI ethics”),[6] and those concerned with AI risks of catastrophic harm (“AI safety”).[7] The two camps diverge in their beliefs about which risks matter most, although they overlap considerably in the steps they believe must be taken to reduce risk from AI (for example, in their calls for AI models to be transparent, accountable, and subject to third-party safety audits). Both wield considerable influence: the former is more influential among human rights and traditional civil society groups, while the latter carries more weight amongst technical researchers and the AI labs themselves. The interests and arguments of these groups thus shape the regulatory environment.
References
[1] W Knight “OpenAI’s CEO Says the Age of Giant AI Models Is Already Over” Wired (17 April 2023).
[2] Stanford University Institute for Human-Centered AI “The AI Index 2024 Annual Report” (2024) at 58.
[3] Ibid at 61.
[4] A Engler “The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment” Brookings Institution (April 2023).
[5] See for example C Rozen “Regulate AI? Here’s What That Might Mean in the US” The Washington Post (27 July 2023).
[6] This is typified by the November 2021 UNESCO “Recommendation on the Ethics of Artificial Intelligence”.
[7] A list of scientists, academics, policymakers, industry professionals, and other notable figures who hold this view can be found here.