Why we need AI governance

Despite the great diversity of AI technologies, and their uses and contexts, there are a number of common concerns and risks associated with AI that underpin the need for governance. (The AI Risk Repository has identified and categorised more than 700 AI risks documented in academic research.)[1] This section explores some of the most prominent themes in AI risk.[2]

Discrimination and bias

There is significant concern about algorithmic bias, where AI systems reproduce biases embedded in their design or in the data on which they are trained, leading to discriminatory or unfair outcomes (often based on race, gender, or other sensitive characteristics). For example:

  • Certain AI recruitment tools used to screen job applicants have been found to favour men over women,[3] or younger applicants over older ones.[4]
  • Banks in the United States that used particular software to assess home-loan applicants were found to be 80% more likely to reject black applicants than white applicants with a similar financial status.[5]

These biases are thought to result from the data used to train the AI systems. For example, if AI recruitment software is trained on all the hiring decisions previously made by human recruiters (who may have been more likely to give jobs to male candidates, or younger candidates), the AI system may replicate those patterns without being designed to. From a rights perspective, algorithmic bias undermines the right to equality and non-discrimination, which is protected under the International Bill of Human Rights.[6]
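
As a purely illustrative sketch (written in Python with synthetic data and the open-source scikit-learn library; it does not depict any real recruitment product), the following example shows how a model trained on historically biased hiring decisions can reproduce that bias when scoring new applicants:

    # Illustrative only: a simple classifier trained on synthetic 'historical
    # hiring' records in which one group was favoured. The model is never told
    # to discriminate, yet it learns to reproduce the skewed outcomes.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic applicants: a qualification score plus a binary group attribute
    # standing in for a sensitive characteristic (e.g. gender or age).
    qualification = rng.normal(0, 1, n)
    group = rng.integers(0, 2, n)

    # Biased historical labels: group-1 applicants were hired far less often
    # than equally qualified group-0 applicants.
    hire_probability = 1 / (1 + np.exp(-(qualification - 1.5 * group)))
    hired = (rng.random(n) < hire_probability).astype(int)

    # Train on the biased records, with the group attribute as a feature.
    features = np.column_stack([qualification, group])
    model = LogisticRegression().fit(features, hired)

    # Score two applicants with identical qualifications but different groups.
    same_skill = np.array([[0.5, 0], [0.5, 1]])
    print(model.predict_proba(same_skill)[:, 1])
    # Prints roughly [0.6, 0.3]: the model has absorbed the historical bias,
    # even though group membership says nothing about ability to do the job.

Note that simply removing the sensitive attribute does not necessarily solve the problem, as other features in the data can act as proxies for it.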

Lack of transparency or explainability

There is a broad range of concerns relating to the lack of transparency in how AI systems are designed and trained, and the lack of explainability (sometimes called interpretability) in how these systems arrive at a particular decision or output. This compounds the discrimination risk described above: the decisions or outputs of an AI system can have far-reaching consequences (for example, a bank’s assessment of a person’s creditworthiness, or a recruiter’s assessment of whether to invite a person to a job interview), yet it may be impossible to know how a decision was made, or whether it was fair or correct.

Privacy and data protection concerns

There are several dimensions to AI privacy concerns, such as:

  • ‘Design-side’ concerns, relating to how AI systems are ‘trained’ on vast amounts of information, which may include storage and analysis of people’s personal information without their knowledge or informed consent.
  • ‘Use-side’ concerns, relating to how AI systems could be used to harm people’s privacy rights, for example by inadvertently leaking personal information which is stored by an AI system, by AI-driven analysis of personal information, or by the exploitation of security vulnerabilities in an AI system.
  • ‘Surveillance’ concerns, since AI is at the heart of invasive surveillance technologies, such as facial recognition and the bulk analysis of communications data.

Since the right to privacy is internationally recognised, governance instruments should include principles and mechanisms that protect this right, as well as effective recourse channels for instances where private information is compromised.

Misinformation and disinformation

There are broad concerns about the capacity of AI systems to produce or spread false or misleading information, potentially on a grand scale, such as:

  • The tendency of generative AI systems such as ChatGPT to confidently present incorrect information as fact (sometimes called ‘hallucination’).[7]
  • The capacity for malicious actors to use AI to generate fake but realistic-looking images, video, or other media, to stoke false beliefs about the world – for example by depicting a politician doing something outrageous.[8]
  • The risk that content-ranking and content-moderation algorithms on social media or search engine platforms spread false or harmful content to a wider audience, or artificially suppress valid information or opinions.[9]

Misinformation and disinformation may contribute to the violation of several rights, including electoral rights, the right to equality and non-discrimination, and freedom of expression.

Loss of human autonomy

These concerns relate to broader social impacts from the rollout of AI systems, beyond the workings of any specific technology, such as:

  • Social and economic inequalities and job losses as human workers are replaced by increasingly sophisticated AI systems.
  • Over-dependence on AI decision-making, where human operators accept the decisions or outputs of AI systems uncritically, or voluntarily give up decision-making powers to those systems because of a conscious or unconscious sense that the AI systems are infallible.

Environmental harms

There is a growing appreciation that advanced AI systems have a significant carbon footprint: they require a huge amount of computing power and server infrastructure, as well as electricity (to power the infrastructure) and water (to prevent it from overheating). For example, between 2020 and 2023, the carbon emissions of major technology companies such as Microsoft, Google, and Meta increased by 40 to 65 percent due to their investments in AI.[10] This has raised concerns that AI development is an obstacle to global responses to the climate crisis and, consequently, that it has a bearing on the enjoyment of the right to a healthy environment.

Geopolitical power imbalances

The fact that the majority of AI technology is developed and owned in the global North has raised concerns about asymmetric development in AI. The attendant risks include:

  • Perpetuating global inequalities and excluding global South communities from potential social and economic benefits of AI technologies.
  • Leaving global South communities, especially more marginalised groups, more vulnerable to the risks of AI technologies, such as algorithmic bias and exclusion.

Existential AI risks

Perhaps most severely, there are genuine concerns that, without sufficient guardrails, AI technologies could develop capabilities to bring about mass harm. For example:

  • Certain AI technologies have already been found capable of using deception or manipulation to carry out the tasks they were designed for,[11] pointing to the need for strong ethical programming to ensure AI systems are guided by human values, laws, and goals.
  • Many AI developers and researchers predict that future AI technologies may become self-aware, or may start to pursue their own goals or interests in opposition to those of humankind.

The aim of AI governance efforts is to create harmonising frameworks that ensure AI technologies are developed in ways that address these risks while harnessing their potential benefits. Importantly, these efforts also aim to ensure that similar standards apply to every developer and in every jurisdiction, so that everyone follows the same set of rules.

Principles of best practice

As demonstrated in this toolkit series, AI governance is scattered across various instruments at the international, regional, and even domestic levels. While there is no single source of best-practice principles, there is significant overlap across this range of documents, perhaps best summed up by the UNESCO Recommendation on the Ethics of Artificial Intelligence, which outlines principles that should underpin any design, use, or output of AI:

  • Proportionality and protection against harm
  • Fairness and non-discrimination
  • Safety and security
  • Sustainability
  • Privacy and data protection
  • Human oversight and determination
  • Transparency and explainability
  • Responsibility and accountability

References

[1] P Slattery and others “A systematic evidence review and common frame of reference for the risks from artificial intelligence” AI Risk Repository (2024).

[2] This section draws on some of the themes outlined in the AI Risk Repository.

[3] J Dastin “Amazon scraps secret AI recruiting tool that showed bias against women” Reuters (October 2018).

[4] Equal Employment Opportunity Commission “EEOC Sues iTutorGroup for Age Discrimination” (May 2022).

[5] E Martinez and L Kirchner “The secret bias hidden in mortgage-approval algorithms” Associated Press (August 2021).

[6] The International Bill of Human Rights refers to the Universal Declaration of Human Rights (1948), the International Covenant on Civil and Political Rights (1966), and the International Covenant on Economic, Social and Cultural Rights (1966).

[7] W Zhao and others “WildHallucinations: Evaluating Long-form Factuality in LLMs with Real-World Entity Queries” [Preprint] (2024).

[8] M Adami “How AI-generated disinformation might impact this year’s elections” Reuters Institute for the Study of Journalism (March 2024).

[9] See for example M Elswah “Does AI Understand Arabic? Evaluating The Politics Behind the Algorithmic Arabic Content Moderation” Carr Center for Human Rights Policy (2023).

[10] G Noble and F Berry “Power-hungry AI is driving a surge in tech giant carbon emissions” The Conversation (July 2024).

[11] P Park and others “AI deception: A survey of examples, risks, and potential solutions” (2024) 5 Patterns.
