Panic In The Newsroom
An international study by the European Broadcasting Union raises alarm.
New research sheds light on how AI assistants like ChatGPT, Gemini, and Perplexity regularly misrepresent news and can even be used to amplify disinformation campaigns.
Democracy requires informed citizens and healthy institutions. The two are interdependent: you can have free and fair elections, but without independent media, democracy dies in darkness.
While I hate to be the bearer of bad news, it wouldn’t be an overstatement to say that generative AI is undermining journalism and the information ecosystem as a whole.
That impression only hardened after reading a recent international study, the largest of its kind, published by the European Broadcasting Union (EBU), which suggests AI assistants like ChatGPT, Gemini, and Perplexity misrepresent news up to 45% of the time.
Simply put, it’s panic in the newsroom.
AI summaries aren’t as accurate as they appear
Unsurprisingly, the use of AI assistants as a channel for getting news is growing. Assistants like ChatGPT, Google Gemini, and Perplexity provide millions of users with summaries of news articles on a daily basis.
As it turns out, it’s a bit like getting your news from Facebook: not to be trusted.
On behalf of the EBU, professional journalists evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against a handful of acceptance criteria and found that 45% of all AI summaries had at least one meaningful issue. Issues ranged from factual inaccuracies to attribution errors to opinions presented as fact.
This isn’t a huge surprise if you’ve been following the headlines, like when Apple pulled its AI-generated news summaries, or when Wired and Business Insider removed AI-written articles. (Something similar happened in my home country, the Netherlands, when the magazine Elle got caught with its hands in the cookie jar, publishing entirely fake articles by nonexistent authors.)
What’s so frustrating is that despite the errors, many users perceive news delivered by AI as trustworthy. This is partly because these chatbots list their sources, which lends them an air of authority.
Complementary research published by the BBC and Ipsos only adds fuel to the fire. When made aware of errors in AI-generated news summaries, people don’t just blame the AI; they blame the news outlet. More than 1 in 3 people instinctively agree that the news source should be held responsible for errors, even if those mistakes are the product of the AI. That’s guilt by association. And the problem, of course, is that it further dilutes people’s trust in the journalistic enterprise.
Deepfakes, AI-infused propaganda, and Sora 2
If I’m being honest, though, AI-generated news summaries aren’t in my top 3 worries.
NewsGuard, an organization that publishes investigative journalism on mis- and disinformation, has been reporting on AI extensively. The New York Times featured its story about how LLMs can internalize state propaganda, and Axios broke its story about a Russian disinformation effort that flooded the web with false claims, which were then amplified by major AI chatbots.
Another worrisome trend is the shocking progress we’re currently seeing in video generation models like Google’s Veo 3 and OpenAI’s Sora 2.
The latter, launched as a standalone app last month, has gone viral for its ability to make hyper-realistic videos of celebrities and historical figures. For example, videos of Dr. Martin Luther King Jr. appearing in bizarre and often deeply offensive scenarios. (Only after a request from the Martin Luther King estate did OpenAI block Sora users from creating deepfake videos portraying him.)
Meanwhile, distasteful videos of the late Robin Williams, John F. Kennedy, Queen Elizabeth, and Stephen Hawking continue to be shared widely online.
NewsGuard also reported its first findings on Sora 2, demonstrating that it can easily be used to generate videos designed to spread fake news.
Oh, and have you heard? The next generation of Google’s text-to-image model may arrive any day now, and if you think AI-generated images couldn’t get more photorealistic, I regret to inform you: you don’t know the half of it.
We’re on the verge of a deepfake epidemic, as Alberto Romero puts it.
Don’t don’t panic
I do not want to sound alarmist. And if I were to add nuance, it would be that the goal of politically inspired deepfakes — according to Dutch mis- and disinformation expert and journalist Menno van den Bos — is usually not to convince people of an untruth. It’s to evoke an emotion. To make people angry or scared. The emphasis is often on mobilizing and inciting one’s own political base, rather than changing people’s beliefs.
That emotive power, however, is only increasing now that images and videos are more realistic than ever.
Therefore, I think it perfectly reasonable to take this threat seriously and to, for lack of a better word, panic.
Especially considering that besides news, medical misinformation is on the rise, and so are AI-powered social media scams. Elon Musk thought it would be smart to launch Grokipedia, a weird Wikipedia clone powered by AI, and its right-wing bias is glaring. And science is under fire, too.
We know fiction spreads faster than fact, simply because the energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it (Brandolini’s law). Laws should be put in place to hold AI companies accountable, but in the absence of meaningful legislation, what can we do if debunking is futile?
Turns out, we should engage in pre-debunking, also known as prebunking: preparing people to practice the skills to spot and resist mis- and disinformation before they’re exposed to the real thing. Right now, we should be educating every adult, adolescent, and child on AI-powered mis- and disinformation, in classrooms, in workplaces, and at the kitchen table.
So if you know what to look out for, it’s probably time to sit down with your grandparents, parents, brothers and sisters, nieces and nephews — and educate them.
Truth, like peace, requires vigilance. We’re going to need everybody, working together, if we want to steer away from a future where people retreat into comfortable fictions, and instead build herd immunity through the hard work that is critical thinking.
Join the resistance,
— Jurgen


