By Alexandra Borchardt

Provocatively put, journalism and generative AI contradict each other: journalism is about facts; generative AI calculates probabilities. Or would you want reporters to fill in the blanks of a story with whatever just sounds likely? Because that is exactly how generative AI works. Nevertheless, GenAI opens up immense opportunities to enhance journalism, from brainstorming ideas, interview questions, and headlines to data journalism and speedy document analysis. It can also transcend formats and languages: it can turn texts into videos, podcasts, and visuals; transcribe, translate, and illustrate; and make content accessible in chat formats. These capabilities might help reach people who have previously been underserved: hyperlocal audiences, those who struggle with reading or comprehension or have other impairments, and those who are simply not interested in consuming journalism in the traditional way. As Ezra Eeman, Strategy & Innovation Director at NPO, the Dutch public broadcaster, says: 'With generative AI, we can fulfil our public service mission better; it will enhance interactivity, accessibility, and creativity. AI helps us to bring more of our content to our audiences.'

But while some in the industry are clearly already drunk on the promises of generative AI, the technology poses considerable risks for journalism. The two most important are a general loss of trust in information and the further erosion, or even disappearance, of journalism's business models. As noted above, 'hallucinations' – the term for generative AI's tendency to fabricate answers, producing convincing lookalikes of facts and sources – are a feature of the technology rather than a bug. But the challenge goes deeper. Since GenAI enables anyone to create any kind of content within minutes, including deep fakes, the danger is that the public might lose trust in all the content that is out there. Media literacy training already advises everyone to be sceptical of content found online; this healthy scepticism might turn into outright distrust as fabricated content proliferates. There is no telling yet whether traditional media brands will benefit from serving as guideposts in this information environment or whether all media will be deemed untrustworthy.

The rise of generative search adds to this calamity, since it threatens to make journalism increasingly invisible. Whereas in the past a Google search returned a set of links, many of them pointing to trustworthy media brands, search output is now increasingly shaped by GenAI. People see synthesised answers directly in text form; they no longer have to dig deeper. No wonder media executives are terrified. Many are rushing to implement AI for efficiency gains, which will not do the job; what is needed is even more investment in quality journalism, to show audiences the difference between mere 'content' on the one hand and well-researched, accurate, and reliable journalism on the other.

An ethical approach to using AI in the media is called for. First, media organisations need an AI strategy focused on what the technology can contribute to delivering public service value. Resources should be directed towards what is desirable and deployed accordingly – always in the awareness that AI carries considerable environmental and societal costs. Saying no should always be an option. Organisations should also use their power and influence when purchasing products, lobbying for regulation, and engaging in copyright and data protection debates; there is a lot at stake. Every organisation must regularly scrutinise the products it uses for biases and stereotypes to avoid amplifying harm. Lastly, in a rapidly changing environment where new products are churned out every day, walking alone is dangerous. Engaging in and promoting collaborations within the industry, and between the industry and tech companies, is essential for charting responsible paths forward.

But there is no doubt that GenAI will deepen the media's dependence on big tech. The more tech companies integrate AI tools into the applications people use in their daily lives, the less control media organisations will have over practices, processes, and products. Their ethical guidelines might then be little more than an add-on to something that has long since been decided elsewhere.

Given all of this, the following hypothesis might come as something of a surprise: tomorrow's journalism might look a lot like yesterday's – and hopefully better. But part of today's journalism will disappear. As it always has, journalism will be about facts, surprises, storytelling, and holding power to account. It will be about building stable, loyal, trusted relationships with audiences by providing guidance, leading conversations, and supporting communities. In a world of artificial content, what real people say, think, and feel will be at a premium, and reporters are uniquely equipped to uncover it. AI can help journalism do better: to serve individuals and groups according to their needs and life situations; to become more inclusive, more local, and enriched with data in ways that were not affordable before. As Anne Lagercrantz, Vice-CEO of Swedish Television, has commented about AI: 'It will fundamentally change journalism but hopefully not our role in society. We have to work on the credibility of the media industry. We need to create safe places for information.' It is safe to conclude that the AI age poses the greatest risks not to journalism itself, but to its business models.

This text is based on the free-to-download report 'Trusted Journalism in the Age of Generative AI', published by the European Broadcasting Union in 2024 and researched and written by Dr Alexandra Borchardt, Kati Bremme, Dr Felix Simon, and Olle Zachrison.