AI Governance for Africa: Part 2, Section 2
As is the case under international law, the regional position on AI governance is scattered across various instruments. This section unpacks a few of the key instruments, and the key principles they have set down for AI governance in Africa.
The adoption of the African Union Continental Artificial Intelligence Strategy, discussed below, is a step in the right direction and will ideally galvanise more states to grapple with the complex dynamics of AI governance.
In relation to individual states, the 2023 Government AI Readiness Index ranked Mauritius, South Africa, Rwanda, Senegal, and Benin as the most AI-ready in the region.[1] The Index assesses readiness using 39 indicators across three pillars: government, the technology sector, and data and infrastructure.
In relation to the number of organisations or institutions working on AI innovation in the region, data from the Centre for Intellectual Property and Information Technology Law (CIPIT) estimates this figure to exceed 2,400, with South Africa, Egypt, and Morocco leading on this front.[2] This figure is likely to continue increasing year-on-year.
While AI readiness across Africa is comparatively low, regional initiatives in the form of strategies, policies, and reports are, on a joint reading, alive to the benefits and risks of this technology in the African context. Mindful of the rapid developments in this space, there is a need to accelerate law and policy reform to mitigate potential risks to fundamental rights.
AU Continental Artificial Intelligence Strategy
In June 2024, the AU endorsed the Continental Artificial Intelligence Strategy, described as an African-centric strategy for the development and adaptation of AI in the African context. The strategy identifies five themes under the banner of AI governance: maximising the benefits of AI, building capabilities for AI, minimising AI risks, exploring African public and private sector investment in AI, and exploring regional and international cooperation and partnerships. It further prioritises certain sectors for potential developmental uses of AI: agriculture, health, education, and climate change adaptation.
The strategy offers further guidance to African states to develop effective and robust approaches to AI governance:
- Implementing and updating laws: States must ensure they have enacted and implemented laws (updating them where necessary to address AI-related harms) in the following areas: intellectual property; electronic communications and transactions; whistleblowing and protected disclosures; access to information; data protection; cybersecurity; consumer protection; antitrust and competition; and laws and policies for the inclusion and empowerment of groups such as women, children, and persons with disabilities.
- Identifying regulatory gaps: Governments should identify other regulatory gaps with respect to AI and ensure that the rule of law is upheld. Gaps may include labour rights for gig and platform workers and regulations applicable to social media and content generators.
- National AI strategies and policies: Governments should establish enabling national AI strategies and policies that align with broader development goals and are developed in consultation with public and private sector experts. In establishing these policies, states should also identify specific sectors where AI can make a positive contribution.
- Independent reviews and assessments: In order to mitigate the potential harm caused by AI, the strategy calls on countries to make use of independent review processes and impact assessments. It cites UNESCO’s Ethical Impact Assessment as a source to derive best practice principles.[3]
- Continued research: Given AI's rapid development, states should undertake continuous research and evaluation to understand emergent risks of AI use in Africa, ensure that AI systems are used in inclusive and sustainable ways, monitor best practices emerging from other jurisdictions, and support regulatory sandboxing initiatives.
AU Convention on Cyber Security and Personal Data Protection (The Malabo Convention)
The AU Assembly adopted the Convention on Cyber Security and Personal Data Protection (the Malabo Convention) in 2014, and it came into force on 8 June 2023, nine years later. This is a significant development, as the Convention aims to establish a comprehensive legal framework for data protection, electronic commerce, and cybersecurity. Now that it is in force, AU member states that have ratified the Convention must implement domestic laws that conform to its principles.[4]
Although the Malabo Convention does not specifically address AI, it provides some useful standards concerning data protection. This is significant in light of the vast amounts of data required to train AI models. There are two notable provisions:
- Article 9 provides that data protection laws should include ‘automated processing’ within their scope of application. This means that AI systems must comply with data protection laws when they process personal data.
- Article 14.5 provides that a person should not be subject to a consequential decision that is based solely on the automated processing of their personal data. This means that an important decision about a person cannot be made entirely by a machine – there must be some human involvement. Most comprehensive data protection laws in African countries already include similar provisions, but the Convention is a welcome development for countries that do not yet provide such protections.
Sharm El Sheikh Declaration and the AU Working Group on AI
In 2019, AU Ministers in charge of Communication and Information and Communication Technology and Postal Services adopted the Sharm El Sheikh Declaration (the Declaration), which acknowledges that digital transformation requires political commitment, the alignment of policies and regulation, and an increase in resources and investment. It further recognises that the AU requires a Digital Transformation Strategy to inform a coordinated response to digital technologies and the Fourth Industrial Revolution (4IR).
Importantly, the Declaration established a Working Group on Artificial Intelligence which is mandated to study the following: “the creation of a common African stance on AI; the development of an Africa wide capacity building framework; and establishment of an AI think tank to assess and recommend projects to collaborate on in line of Agenda 2063 and SDGs.”
The Working Group is made up of experts from Egypt, Ghana, Kenya, Mali, Algeria, Cameroon, Ethiopia, and Uganda.[5] Egypt was elected as the Chair of the Working Group, Uganda as the Vice Chair, and Djibouti as the Rapporteur. The Working Group has met three times since its formation in 2019.[6] Despite the important role that this Working Group could play, there is limited public information about how it has fulfilled, or intends to fulfil, its mandate.
African Commission Resolution 473
In March 2021, the African Commission on Human and Peoples’ Rights (ACHPR) adopted Resolution 473, which concerns AI, robotics, and other new and emerging technologies. The Resolution calls on State Parties to ensure that the development and deployment of such technologies are compatible with the rights in the African Charter.[7] Notably, it calls on State Parties to place these technologies on their agendas and to work towards a comprehensive governance framework.[8] It appeals to State Parties to maintain human control over AI, noting that this requirement should be codified as a human rights principle.[9] Through the Resolution, the ACHPR also committed to undertaking a study to develop standards to address the challenges posed by these technologies.[10] The study is not yet complete.
SMART Africa and the AI for Africa Blueprint
In 2013, seven African Heads of State[11] adopted the SMART Africa Manifesto, which aimed to accelerate socio-economic development through the use of ICTs. Importantly, in 2014, the Manifesto was endorsed by all Heads of State and Government of the African Union and now has 53 signatories. The SMART Africa Alliance was formed to implement the Manifesto and monitor compliance with it.
The Smart Africa Alliance, together with several partners, developed an AI for Africa Blueprint to “outline the most relevant opportunities and challenges of the development and use of AI for Africa and how to address them”; and “to make concrete policy recommendations to harness the potential and mitigate the risk of AI in African countries.”[12]
The Blueprint provides actionable recommendations to assist states with the implementation of national AI strategies. In doing so, it acknowledges the diversity of African states and accordingly does not propose a single AI policy solution. Instead, it provides guidelines that states can use to formulate their own context-specific policies. The Blueprint details five areas that it recommends be considered in the formulation of a national policy: human capital, AI adoption (from lab to market), networking, infrastructure, and regulation.[13]
The Blueprint recognises the critical need for a robust governance framework to regulate AI, recommending that an adequate legal framework should consider the following elements:[14]
- Legal provisions on copyright, patents, and unfair competition;
- Data protection mechanisms, and mechanisms for data sharing;
- Guidelines on ethical design and procurement;
- Provisions to create an enabling business environment, such as incentives, infrastructure, cybersecurity, and clarity on liability and licensing issues; and
- Intersectional policy measures that cut across multiple regimes and industries, such as financial markets, financial services, health, and other sectors.
The Blueprint acknowledges the difficulty with AI regulation by stating:[15]
Uninformed approaches to governance can lead to systemic biases and overregulation that can and will stifle innovation, thus limiting the opportunities that can be leveraged and further creating an environment for political abuse. At the same time, under-regulation will result in cultivating a culture whereby trust and confidence is absent, with consumers and citizens being left unprotected.
It further notes that the governance of AI will require a combination of hard and soft approaches. Hard approaches refer to the adoption of binding laws and regulations, which the Blueprint suggests are necessary only in response to particular concerns that cannot be solved through other measures. It recommends a hard approach for issues concerning copyright and patents, investment and intellectual property, and unfair competition. Soft law refers to substantive expectations that are not enforceable by governments, including guidelines, standards, codes of conduct, and best practices. The Blueprint acknowledges that soft law will likely fill governance gaps while regulatory measures are being developed.
AU Child Online Safety and Empowerment Policy
Given children’s evolving capacities, a child-centric policy on AI is critical. In May 2024, the AU adopted the Child Online Safety and Empowerment Policy with the view of protecting children as they engage online and mitigating the risks associated with the internet. While the Policy does not expressly mention AI, it broadly advocates for states to ensure that ICT policies protect the best interests of the child, take proactive measures to curb discrimination, protect children’s right to life, survival, and development, and enable consultative processes with children.
What are regulatory sandboxes?
A regulatory sandbox is an example of a soft law measure that is often used in response to innovative technologies, including AI. A regulatory sandbox is a framework that allows “start-ups and other innovators to conduct live experiments in a controlled environment under a regulator’s supervision.”[16]
Mauritius has a Sandbox Framework for the Adoption of Innovative Technologies in the Public Service,[17] aiming to help the public sector better understand the challenges, costs, and capabilities of emerging technologies before conducting a formal procurement process.
Regulatory sandboxes have also been used in Ghana, Nigeria, South Africa, Zimbabwe, and Rwanda.[18] The Communications Authority of Kenya also recently held consultations on a sandbox framework for emerging technologies.[19]
References
[1] Oxford Insights “Government AI Readiness Index” (2023).
[2] AU Continental Artificial Intelligence Strategy (2024) at page 17.
[3] UNESCO “Ethical impact assessment: a tool of the Recommendation on the Ethics of Artificial Intelligence” (2023).
[4] ALT Advisory “AU’s Malabo Convention set to enter force after nine years” (19 May 2023).
[5] Egypt’s Ministry of Communications and Information Technology “Egypt Hosts AU Working Group on AI First Session” (6 December 2019).
[6] Egypt’s Ministry of Communications and Information Technology “Egypt Chairs AU Working Group on AI” (25 February 2021).
[7] African Commission on Human and Peoples’ Rights “Resolution 473 on the need to undertake a study on human and peoples’ rights and artificial intelligence (AI), robotics and other new and emerging technologies in Africa” (10 March 2021) (Resolution 473), section 1.
[8] Sections 4 and 5 of Resolution 473.
[9] Section 6 of Resolution 473.
[10] Section 7 of Resolution 473.
[11] Rwanda, Kenya, Uganda, South Sudan, Mali, Gabon, and Burkina Faso.
[12] SMART Africa “Artificial Intelligence for Africa Blueprint” (2021) at page 14 (AI Blueprint).
[13] See Chapter 3 of the AI Blueprint.
[14] AI Blueprint at 41.
[15] AI Blueprint at 41.
[16] AI Blueprint at 42.
[17] Republic of Mauritius Ministry of Public Service, Administrative and Institutional Reforms “Sandbox Framework for Adoption of Innovative Technologies in the Public Service” (March 2021).
[18] See African Observatory on Responsible Artificial Intelligence “Sandboxes in Mauritius” (8 June 2023).
[19] The consultation process has closed, but information about it can be accessed on the Communications Authority of Kenya’s website.
Explore the rest of the toolkit
Part 1: Introduction to AI Governance
Part 1 gives an overview of AI governance principles and approaches, and outlines international frameworks, with case studies from the European Union, the United States, and China. It discusses common concerns and themes driving AI governance.
Part 3: Advocacy Strategies for AI Governance
Part 3 explores a series of key questions for the design of advocacy strategies on AI governance, particularly in African contexts.