Foundation News Tuesday, March 31 2026 14:40 GMT

World’s largest dataset shows companies are adopting AI much faster than they are governing it

A visitor checks laptops during the opening day of Madrid’s International Data Processing, Multimedia and Communications SIMO Fair November 7, 2006. REUTERS/Victor Fraile (SPAIN)

The Thomson Reuters Foundation has released its findings on the global corporate adoption of AI, based on the AI Company Data Initiative (AICDI) – the world’s largest dataset on how companies are using AI.  

AICDI analysed publicly available information from almost 3,000 global companies across 11 sectors and found a transparency gap between businesses’ ambition to harness the potential of AI and the mechanisms in place to manage its risks. 

These findings suggest that the challenge of responsible AI is no longer awareness but ensuring principles translate into practice. Our AI Company Data Initiative provides a comparable, actionable dataset so that companies and investors can identify good practice and material risk.


Katie Fowler, Director of Responsible Business, Thomson Reuters Foundation

Helping the workforce adapt to AI

AICDI data found that companies are not demonstrating how they are preparing employees to adapt to AI, potentially leaving themselves vulnerable to talent and productivity losses: 

  • Just over one in ten companies report having policies in place to mitigate the negative impacts of AI systems on workers 
  • A third of companies say they offer some AI-related training to employees, but fewer measure the outcomes of that training, leaving many workers with a piecemeal understanding of this technology and how it may change jobs in the future. 

Ethical impact of AI

Companies share limited information publicly on the potential human rights impact of AI despite growing pressure from regulators, policymakers and campaigners. The policies analysed by AICDI frequently covered high-level principles but there was less specific evidence of how ethical risks were managed in practice. 

For example, roughly one in ten companies reported that they had a policy to ensure that a human oversees AI systems, while only 7% of companies reported assessing the human rights impact of AI. 

Governance transparency gap

There is also a transparency gap when it comes to what companies make public about how they govern their AI use.  

  • Nearly half of the companies sampled shared publicly that they have an AI strategy 
  • Just over one in ten companies publicly committed to a recognised governance framework or standard 
  • Less than a third have a dedicated team or resource for AI governance
  • Just 15% of companies publicly stated that they use AI in HR processes; many do not communicate this matter publicly.  

This suggests that most companies’ AI strategies are focused on accelerating adoption and extracting value rather than on establishing robust governance, responsible AI commitments or future-proofing their workforce.   

Driving responsible AI through data

Grounded in the UNESCO Recommendation on the Ethics of AI – the leading global standard on AI ethics – and powered by the Foundation, our initiative is a framework that informs responsible investment decisions and enables companies to self-assess their AI adoption and mitigate risks. 

AICDI offers a free tool for companies to map where AI is used across products, operations and services, to help them implement comprehensive governance policies.  

The initiative is backed by a group of investors with combined assets under management of $1.2 trillion, to inform their stewardship. 

Empowered with this data, companies and investors are better equipped to navigate upcoming laws and audits as regulations catch up with AI technology. 

Download the full report to find out how companies are adapting to AI.
