This practical starter guide is designed to help newsrooms identify ethical risks in their AI applications and take action to mitigate them.
Overview
Our research with journalists in the Global South and emerging economies shows that while 81% of journalists are already using AI in their work, only 13% have established AI policies.
Protecting journalism in the AI age means adopting it responsibly. This guide is intended as a starting point for ongoing conversations within your organisation on how to use AI while upholding journalistic values—accuracy, fairness, transparency, and accountability. These principles serve as essential pillars for informed communities worldwide, allowing journalism to fulfil its vital role in society.
“AI is only a threat if we do not learn how to use it.”
Journalist, Kenya
Why it matters
We are witnessing a major transformation in how news is created, distributed, and consumed. The adoption of AI in journalism offers both opportunities and risks for newsrooms worldwide.
AI tools can improve the production and distribution of news, speed up fact-checking, and help resource-limited newsrooms do more. Yet the same tools carry significant risks.
When newsrooms use generative AI without proper human oversight, they risk publishing errors that directly damage their reputation and reader trust—potentially driving away subscribers and making advertisers, funders, or sponsors think twice.
Similarly, AI recommendation systems that chase clicks and engagement can end up burying newsrooms’ best work. Those investigative stories your team spent months researching—about corruption or human rights abuses—might get pushed aside for lighter or viral content, essentially wasting your newsroom’s limited resources and diluting what makes your journalism stand out.
For news organisations already struggling financially, these AI missteps can directly hit your revenue, weaken your competitive edge, and even create legal headaches.
Getting AI right not only upholds journalistic standards and ethics, but also can protect your newsroom’s financial health, reputation, and future in an increasingly challenging media environment.
By developing responsible AI practices now, your newsroom will be better positioned to adapt to future technological developments while maintaining trust with your audience.
Why addressing AI risks is important for your newsroom
When thoughtfully integrated, AI tools can amplify your newsroom’s capabilities while preserving the human judgment that defines quality journalism.
Reputation and revenue
Using AI without proper human oversight risks publishing errors that damage your outlet’s credibility, leading to subscription losses and advertiser hesitation.
When readers lose trust, the financial consequences are often immediate and lasting.
Content value and visibility
AI recommendation systems that are optimised solely for engagement can bury your most important journalism.
Your newsroom invests substantial resources in investigative reporting. AI systems should be configured to properly showcase this valuable content, not hide it.
Strategic advantage
Implementing AI responsibly enhances efficiency while maintaining the journalistic standards that distinguish your newsroom.
News organisations that master ethical AI use gain competitive edge in operational capacity and audience trust.
Legal protection
Publishing AI-generated inaccuracies increases liability exposure.
Clear AI governance protocols protect your organisation from potential lawsuits.
How to use this guide
Each step in this guide builds on the previous one, helping you move from understanding your current AI usage to implementing responsible practices.
- Step 1: Identify your AI tools
- Step 2: Map the risks and solutions
- Step 3: Integrate AI guidelines into your editorial policies
We recommend involving journalists, editors, and technical staff in this process to ensure that diverse perspectives are captured.
Related resources
- Journalism in the AI Era: A TRF Insights survey
- Towards Algorithmic Transparency in the Public Sector in Latin America
- 2024 AI Governance for Africa Toolkit
- Balancing progress and human rights: Is Thailand ready for Artificial Intelligence that respects human rights? This study examines Thailand’s AI and technology landscape through the lens of human rights.
- Regulatory Mapping on Artificial Intelligence in Latin America
- AI Governance for Africa Toolkit – Regional and International Frameworks
- AI Governance for Africa Toolkit – Building Advocacy Strategies