AI Governance for Africa > Part 1 > Section 9
Previous sections of this toolkit explored some of the risks associated with artificial intelligence, and some of the legal and policy frameworks that seek to avoid those risks. A common requirement in many of these frameworks is to undertake an AI impact assessment. This section provides more detail about these assessments.
What is an AI impact assessment?
An impact assessment is a process to identify and mitigate potential harms that could result from a particular project, tool, or policy. Outside of the context of AI, for example, environmental impact assessments are often a legal requirement before undertaking a major construction project; more recently, data protection impact assessments have become a common requirement for the design of policies, technologies, or practices that may involve the use of people's personal information.
In the context of AI, impact assessments are widely regarded as an important process to mitigate the potential harms that could result from a particular AI technology. They generally focus on human rights implications, but could include environmental or other concerns as well.
Depending on the regulation and the risk, an impact assessment could be undertaken at different stages of the process – and more than once. For example, it may be conducted before or during the design and development of the technology (ex ante), or after it is completed (ex post). It may be a requirement of the developers of the technology (to test and refine their design), or of the deployers of the technology (to test and refine their use of it) – or both.
If the assessment identifies a potential harm that outweighs any benefit that comes from the technology, it could result in the technology or use being re-designed, or even discontinued.
Importantly, impact assessments are necessarily grounded in the context of the technology or use being assessed, which means there is no single format for an AI impact assessment.
Need an AI impact assessment?
For a compilation of AI impact assessment resources, visit the ALT Advisory website.
AI impact assessments feature prominently as a safeguard in guidelines and standards for AI governance. The two preeminent AI laws enacted in 2024, the EU AI Act and Colorado's AI Act, both include requirements for developers and/or deployers to conduct AI impact assessments for 'high-risk' systems.
Explore the rest of the toolkit
Part 2: Emerging AI Governance in Africa
Part 2 examines existing and emerging AI governance instruments in Southern Africa – in particular, South Africa, Zambia, and Zimbabwe. More broadly, it also outlines continental responses and details existing governing measures in Africa.
Part 3: Advocacy Strategies for AI Governance
Part 3 explores a series of key questions for the design of advocacy strategies on AI governance, particularly in African contexts.