
AI governance in South Africa

This section expands on the state of AI governance in Southern Africa by focusing on a few case studies in the region — starting with South Africa.


Of the three Southern African countries profiled in this toolkit, South Africa has made the most significant progress on AI governance. Gaps remain, however, in both policy and its implementation.

At a broader level, the right to privacy is protected in section 14 of the Constitution.[1] Section 14(d) safeguards everyone’s right not to have the privacy of their communications infringed. The Constitutional Court has recognised the importance of the right to privacy, describing it in the following terms:

“The right to privacy accordingly recognises that we all have a right to a sphere of private intimacy and autonomy without interference from the outside community. The right to privacy represents the arena into which society is not entitled to intrude. It includes the right of the individual to make autonomous decisions, particularly in respect of controversial topics. It is, of course, a limited sphere.”[2]

As noted elsewhere in this toolkit, the unfettered use of AI can have significant implications for this right. In July 2021, the Protection of Personal Information Act (POPIA) came into effect.[3] The Act gives effect to the constitutional right to privacy and sets out normative standards for how data controllers must process personal information. While POPIA does not expressly mention AI, it does address automated decision-making in section 71, which prohibits subjecting data subjects to decisions that have legal consequences for them and are based solely on the automated processing of their personal information.

Further, there is some degree of overlap between the standards reflected in the Act and governance principles for AI. For example, POPIA speaks to accountability, processing limitations, and security safeguards.[4]

In March 2024, South Africa and Zambia endorsed the UN’s non-binding resolution on AI.[5] The resolution, discussed in greater detail in Part 2 of this toolkit, was co-sponsored by over 120 States and places international law and international human rights at the centre of AI governance.

In 2022, South Africa established the Artificial Intelligence Institute of South Africa (AIISA) in response to the recommendations of the Presidential Commission on the Fourth Industrial Revolution (PC4IR).[6] Its primary function, which is broadly framed, is to generate knowledge and applications that position South Africa in “the global AI space.”

In 2023, the Department of Communications and Digital Technologies (DCDT), together with the AIISA, published a draft AI National Discussion Document.[7] While the status of the Discussion Document is somewhat unclear, it lists several ethical themes for the DCDT and the AIISA to consider:[8] anti-competitive conduct in AI; risks posed by robotic or autonomous devices that use AI; aggressive job losses; criminal behaviour; existential risks should AI technologies “get out of control” and pursue goals detrimental to humanity; and the risk of military-purpose AI falling into the wrong hands and being used in a broader, public context.[9]

In August 2024, the DCDT issued a draft National AI Policy Framework for public comment.[10] The framework aims to lay the groundwork for a national AI policy by establishing the framing principles and concerns that should guide the final policy. It places a strong emphasis on AI ethics, stipulating that South Africa’s national AI policies and strategy must protect human rights, promote inclusion, and address the various risks and harms associated with AI.

The framework names nine strategic pillars for South Africa’s AI policy, including: developing the country’s AI talent pool; developing the necessary digital infrastructure; investing in AI research and development; promoting public sector uses of AI; developing standards and guidelines for ethical design and use of AI; ensuring data protection; ensuring safety and security; and promoting AI transparency and explainability. As a high-level document, the framework does not give guidance on how each of these ambitious goals could be implemented.

With respect to relevant developments in case law, a class action against Uber, announced in 2021, is ongoing.[11] The case seeks to determine the employment status of Uber drivers and to resolve questions around their compensation. While it is presently unclear how far the proceedings have advanced, the case appears to hinge on precedent set by the UK Supreme Court.

As South Africa continues to refine its AI governance, it must do so with the input of a wide range of stakeholders.

In the next section, we continue the exploration by turning to the state of AI governance in Zambia.

