Three key shifts are reshaping what ‘human rights in tech’ means in practice for companies.
1. Moving beyond ethical principles
For much of the past decade, technology governance has been dominated by high-level ethical principles. But these alone are no longer sufficient. Today, responsible AI adoption demands strong implementation and accountability mechanisms.
Regulators, courts and civil society increasingly frame technology harms as potential human rights violations rather than as technical failures or unintended side effects. The focus is shifting to whether harms were foreseeable, preventable and addressed when they happened.
To get ahead of this shift, companies can demonstrate that they have:
- identified risks
- taken meaningful steps to prevent harm, and
- put in place ways to help people who are adversely affected.
This marks a fundamental change where corporate responsible AI means having effective processes, decisions and accountability mechanisms throughout the AI lifecycle, from design and data sourcing to deployment and oversight.
2. Recognising the importance of data workers’ rights
Data workers, including those involved in data labelling, content moderation and other forms of digital labour, are essential to AI systems. Yet they are often exposed to precarious conditions, low pay, and psychological harm.
As scrutiny of AI supply chains grows, expectations are changing. Fair compensation, safe working environments, mental health support and transparency about how data labour is organised are now central to discussions about responsible AI.
Recognising data workers as skilled, value-creating contributors strengthens both governance and outcomes for businesses.
Companies that address labour conditions in their AI supply chains are better positioned to build resilient and trustworthy technologies – while those that neglect them face reputational damage, regulatory scrutiny and systems built on unstable foundations.
3. AI in the workplace is a human rights issue
Concerns about surveillance, deskilling and loss of autonomy are reframing responsible AI adoption as a human rights issue. Workers and their representatives are demanding greater transparency about AI use, meaningful consultation on deployment decisions, and the ability to contest automated outcomes.
Organisations that fail to ensure meaningful worker involvement in corporate AI adoption risk reduced morale, talent retention challenges, reputational damage and operational inefficiencies.
Consultation with workers and meaningful oversight of existing AI systems are becoming central to responsible corporate AI adoption, alongside the recognition that technological change should augment human work and provide opportunities.
What should companies do now?
These shifts indicate that reactive compliance is no longer enough. Human rights due diligence in AI deployment should be treated as a continuous process. There are several steps that companies need to take to stay ahead:
1. Audit the human rights impact of your AI use
Companies can identify, assess, and mitigate risks associated with technologies across their full lifecycle, and adjust their design and deployment choices accordingly.
Our AI Company Data Initiative (AICDI) helps corporate leaders map how AI is developed and used across their operations. It supports businesses to identify potential human rights risks – such as labour conditions in data labelling, content moderation and outsourced digital work – and formulate a plan to address these.
2. Engage with stakeholders
The lived experience of workers, users and affected communities – together with input from unions, civil society organisations and subject-matter experts – often reveals risks that technical teams cannot see alone.
3. Involve all teams in governance
Effective AI governance requires collaboration across engineering, product, procurement, HR, policy, legal and risk functions. Clear accountability at senior levels is key to ensure that risks to people and society can be identified and mitigated.
4. Design and deploy AI responsibly
Taking a responsible approach to AI leads to stronger outcomes in the long term. If companies embed fairness, explainability and human oversight from the outset, they future-proof themselves against regulatory change and future harms – especially in systems that affect employment, working conditions or access to opportunities.
5. Strengthen transparency
Transparency builds trust, but only when paired with meaningful avenues for redress. Companies can demonstrate meaningful transparency by publicly disclosing relevant policies and practices related to data sourcing, data labour, supply chains, and workplace AI.
6. Collaborate across ecosystems
No company can address these challenges alone – shared risks require collective solutions. Industry initiatives such as the AI Company Data Initiative can help establish and benchmark common standards for ethical AI, fair data labour and responsible workplace technology. Engagement with governments, unions, NGOs and researchers is equally important to shaping norms that protect workers and uphold human rights across complex technology ecosystems.
Responsible AI considerations are critical to successful AI adoption that can withstand the lightning pace of this technology. Companies that recognise this and act will be better equipped to harness the opportunities of AI and create technologies that serve the public interest.

Natalia Domagala, Responsible AI Advisor, is an award-winning AI expert who develops strategies for ethical innovation.