Foundation News – Thursday, April 9, 2026, 08:45 GMT

Responsible AI adoption: What companies should know

As artificial intelligence becomes embedded across economies and workplaces, the question of how companies can adopt it responsibly – and protect people in the process – is more pressing than ever.


Three key shifts are reshaping what ‘human rights in tech’ means in practice for companies.

1. Moving beyond ethical principles

For much of the past decade, technology governance has been dominated by high-level ethical principles. But these alone are no longer sufficient. Today, responsible AI adoption demands strong implementation and accountability mechanisms.

Regulators, courts and civil society are increasingly framing technology harms as potential human rights violations rather than as technical failures or unintended side effects. The focus is shifting to whether harms were foreseeable, preventable and addressed when they occurred.

To get ahead of this shift, companies should be able to demonstrate that they have:

  • identified risks
  • taken meaningful steps to prevent harm, and
  • put in place ways to provide remedy when people are adversely affected.

This marks a fundamental change: responsible corporate AI now means having effective processes, decisions and accountability mechanisms throughout the AI lifecycle, from design and data sourcing to deployment and oversight.

2. Recognising the importance of data workers’ rights

Data workers, including those involved in data labelling, content moderation and other forms of digital labour, are essential to AI systems. Yet they are often exposed to precarious conditions, low pay, and psychological harm.

As scrutiny of AI supply chains grows, expectations are changing. Fair compensation, safe working environments, mental health support and transparency about how data labour is organised are now central to discussions about responsible AI.

Recognising data workers as skilled, value-creating contributors strengthens both governance and outcomes for businesses.

Companies that address labour conditions in their AI supply chains are better positioned to build resilient and trustworthy technologies – while those that don't face reputational damage, regulatory scrutiny and systems built on unstable foundations.

3. AI in the workplace is a human rights issue

Concerns about surveillance, deskilling and loss of autonomy are reframing responsible AI adoption as a human rights issue. Workers and their representatives are demanding greater transparency about AI use, meaningful consultation on deployment decisions, and the ability to contest automated outcomes.

Organisations that fail to ensure meaningful worker involvement in corporate AI adoption risk reduced morale, talent retention challenges, reputational damage and operational inefficiencies.

Consultation with workers and meaningful oversight of existing AI systems are becoming central to responsible corporate AI adoption, alongside the recognition that technological change should augment human work and provide opportunities.

What should companies do now?

These shifts indicate that reactive compliance is no longer enough. Human rights due diligence in AI deployment should be treated as a continuous process. There are several steps that companies need to take to stay ahead:

1. Audit the human rights impact of your AI use

Companies should identify, assess and mitigate risks associated with these technologies across their full lifecycle, and adjust their design and deployment choices accordingly.

Our AI Company Data Initiative (AICDI) helps corporate leaders map how AI is developed and used across their operations. It supports businesses to identify potential human rights risks – such as labour conditions in data labelling, content moderation and outsourced digital work – and formulate a plan to address these.

2. Engage with stakeholders

The lived experience of workers, users and affected communities – along with input from unions, civil society organisations and subject-matter experts – often reveals risks that technical teams cannot see alone.

3. Involve all teams in governance

Effective AI governance requires collaboration across engineering, product, procurement, HR, policy, legal and risk functions. Clear accountability at senior levels is key to ensuring that risks to people and society can be identified and mitigated.

4. Design and deploy AI responsibly

Taking a responsible approach to AI leads to stronger outcomes in the long term. Companies that embed fairness, explainability and human oversight from the outset are better prepared for regulation and less exposed to future harms – especially in systems that affect employment, working conditions or access to opportunities.

5. Strengthen transparency

Transparency builds trust, but only when paired with meaningful avenues for redress. Companies can demonstrate meaningful transparency by publicly disclosing relevant policies and practices related to data sourcing, data labour, supply chains and workplace AI.

6. Collaborate across ecosystems

No company can address these challenges alone – shared risks require collective solutions. Industry initiatives such as the AI Company Data Initiative can help establish and benchmark common standards for ethical AI, fair data labour and responsible workplace technology. Engagement with governments, unions, NGOs and researchers is equally important to shaping norms that protect workers and uphold human rights across complex technology ecosystems.

Responsible AI considerations are critical to successful adoption that can keep pace with this fast-moving technology. Companies that recognise this and act now will be better equipped to harness the opportunities of AI and create technologies that serve the public interest.

Natalia Domagala, Responsible AI Advisor, is an award-winning AI expert who develops strategies for ethical innovation.

AI Company Data Initiative

The AICDI supports responsible corporate AI adoption by giving companies the tools to map their AI usage and equipping investors to anticipate regulatory and liability pressures.