Toolkit Wednesday, May 7 2025 10:36 GMT

AI governance in Europe

Given that regulation is evolving quickly across the world, the following sections look at the high-level approaches being taken by different major global powers – starting with Europe.


European Union’s AI Act

In 2024, the EU passed the Artificial Intelligence Act (the AI Act), the first comprehensive legal framework for the development and use of AI systems. The AI Act takes a risk-based approach, categorising AI systems into several levels of risk and imposing different requirements and obligations on the design and use of systems in each category within the EU.[1]

  • Unacceptable risk: Certain systems are deemed unacceptably risky and are prohibited within the EU. These include systems that employ harmful manipulative ‘subliminal techniques’, systems used by public authorities for social scoring, and real-time biometric surveillance, such as facial recognition.
  • High risk: AI systems are designated as ‘high risk’ if they are used as a safety component in products falling under the EU’s health and safety legislation, or are used in certain specified areas such as education, migration, law enforcement, or the management of critical infrastructure.[2] These systems must be registered in an EU-wide database managed by the European Commission and must comply with a range of measures related to testing, data governance, transparency, human oversight, and cybersecurity.
  • Limited risk: Systems that interact with humans (such as chatbots), as well as systems that generate audio, visual, and other types of content, are designated as ‘limited risk’ and are only subject to basic transparency obligations (such as the requirement to disclose to affected persons that they are AI systems).
  • Low or minimal risk: All other AI systems considered to pose low or minimal risk are not bound by any obligations, although the Act envisions the creation of codes of conduct to encourage the AI labs that develop them to voluntarily abide by the measures required of high-risk systems.

The Act provides for the establishment of the European Artificial Intelligence Office (the AI Office), which will oversee implementation and compliance alongside other oversight and advisory structures, and requires all EU member states to establish national authorities to oversee the Act’s operation and enforcement domestically. These authorities will have access to confidential information (such as the source code of the relevant systems) and the power to impose fines for non-compliance.

The Act entered into force in August 2024, and its various provisions will apply in phases over the following years.

During the law-making process, the Act faced opposition from global technology firms and other actors on the basis that it could jeopardise the EU’s competitiveness and technological sovereignty without necessarily solving the relevant AI-related challenges.

Human rights groups, on the other hand, have criticised the AI Act for far-reaching exemptions and exceptions which may enable human rights violations through the use of AI.[3] For example, the Act’s safeguards do not apply to AI systems used for national security,[4] and critics have argued that the Act’s partial ban on real-time biometric surveillance such as facial recognition should have applied to all forms of biometric surveillance.[5] (It is unclear, for example, that ‘retrospective’ use of AI for biometric surveillance, such as conducting facial recognition on video footage recorded a week earlier, is necessarily less invasive than doing so on live footage.)

Council of Europe’s AI Convention

In May 2024, the Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the AI Convention),[6] described as the first internationally binding treaty on AI.

The AI Convention’s broad aim is to oblige all signatory states to adopt laws and administrative measures which ensure that the full lifecycle of any AI system is consistent with human rights, democracy, and the rule of law.[7]

  • Scope: These measures apply principally to the use of AI by public actors or private actors serving a public function, with less stringent requirements for the design or use of AI solely by private actors.
  • Principles: State parties are obliged to enact legislation and other measures to ensure that the full lifecycle of such AI systems is consistent with their existing obligations to protect human rights, democratic processes, and the rule of law, and conforms to common principles for AI governance.[8]
  • Obligations: States are also obliged to provide for reasonable remedies and complaints mechanisms in the event that an AI system is inconsistent with these principles,[9] and adopt measures for AI impact assessments or to consider bans on certain AI systems that are inconsistent with these principles.[10]

The AI Convention will come into force once it has been ratified by five states, including at least three member states of the Council of Europe. No states had ratified at the time of this publication, but the Convention had received non-binding signatures from eight member states, including Norway and the UK, as well as from the US, Israel, and the EU.[11]

Human rights groups have criticised the AI Convention on a range of grounds, including its broad exemptions for AI systems used for national security and defence purposes, and for applying lower standards and protections to AI systems in the private sector.[12]


References

[1] The following analysis of the EU AI Act’s risk categories draws on a guidance note by the European Commission Directorate‑General for Communications Networks, Content and Technology.

[2] The full list of specified areas is as follows: Biometric identification and categorisation of natural persons; management and operation of critical infrastructure; education and vocational training; employment, worker management and access to self-employment; access to and enjoyment of essential private services and public services and benefits; law enforcement; migration, asylum, and border control management; assistance in legal interpretation and application of the law.

[3] Leufer “Why human rights must be at the core of AI governance” Access Now (September 2024).

[4] European Center for Not-for-Profit Law “Packed with loopholes: why the AI Act fails to protect civic space and the rule of law” (April 2024).

[5] Article 19 “EU: AI Act fails to set gold standard for human rights” (April 2024).

[6] Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law CETS No. 225 (2024).

[7] Ibid, Article 1.

[8] These principles, outlined in Articles 7-13, are: Human dignity and individual autonomy; Transparency and oversight; Accountability and responsibility; Equality and non-discrimination; Privacy and personal data protection; Reliability; and Safe innovation.

[9] Ibid, Article 14.

[10] Ibid, Article 15.

[11] A schedule of signatures and ratifications for the AI Convention is maintained by the Council of Europe’s Treaty Office.

[12] See European Network of National Human Rights Institutions “Statement of Concern on Draft Convention on AI” (March 2024).

[Image: Silhouettes of demonstrators marching around the Hungarian parliament to protest against Hungarian Prime Minister Viktor Orban and the latest anti-LGBTQ law in Budapest, Hungary, June 14, 2021. REUTERS/Marton Monus]