A 2025 study highlights a paradox for newsrooms that use artificial intelligence. People want them to say when they use AI, but trust drops when they do.
The survey by Trusting News came on the heels of an earlier one in 2024, which found that 94% of news consumers want journalists to disclose their use of AI. But when news organizations did exactly that, the new research found, more than a third of respondents said they lost trust in the story.
"So, I will say personally, I was disappointed," said Lynn Walsh, associate director of Trusting News, the nonprofit journalist training organization behind the research.
"We went to news consumers and we said, ‘What do you want?’ If journalists are going to use this tool, how can they do it responsibly? And the first thing they said was, ‘Tell us you're using it.’ They want to know almost all the time,” she said.
So, she and Benjamin Toff, associate professor of Journalism & Mass Communication at the University of Minnesota, developed AI disclosure statements with the help of 10 newsrooms. They placed those statements in stories where journalists used AI and invited feedback.
“They were very thoughtful disclosures that talked about what AI did, [and] the fact that a human was always involved. They talked about the adherence to accuracy standards [and] ethics standards,” said Walsh. “So, these disclosures were a couple sentences, not just your basic ‘AI was used.’”
But then came the paradox. While 30% said they were more likely to trust the story because of the disclosure statement, 42% said they were less likely.
“What I think is happening is that, right now, people have very, very strong feelings about AI. Mostly negative,” Walsh said. “Whether those are just negative feelings or confusion or misunderstanding, those feelings are overshadowing anything you're doing to kind of make someone be OK with it.”
The research found that people were generally comfortable with journalists’ use of AI for background work like transcription, but were less so with content creation, like writing stories or making images.
Walsh said distrust dropped as disclosures became more specific about how and why AI was used, a practice she recommends all journalists adopt.
“So, did it help you edit? Did it help you transcribe an interview? Then, when you used it, how did you make sure that you fact-checked or made sure it was ethical and accurate? And then talk about how it benefited the community. Did it allow you to go more in depth? Did it allow you to provide more content on more platforms? And then also we recommend linking to that AI ethics policy,” Walsh said.
AI policies are crucial. Sixty-two percent of those surveyed said news organizations should only use AI if they establish clear ethical guidelines and policies around its use. Notably, 30% said they thought newsrooms should not use AI at all, under any circumstances.
Walsh said journalists should be using AI and should always disclose it, even if the public's current wariness means that disclosure costs them some trust.
She also encouraged journalists and news organizations to talk to their audiences about AI and find out how they feel about it.
“The public's confused, they're scared. This is a technology that sometimes feels like it's being pushed on them,” she said. “And I feel like we really are at an opportunity as a society to decide, where do we want this technology to be used? How do we want it to be used? But if people don't understand it, they can't make those decisions or be part of those decisions.”
Beyond transparency on when and where AI is being used, Walsh had one last suggestion for journalists: Teach the public about AI, its benefits as well as its dangers. Otherwise, she said, why would a newsroom expect the public to trust any use of AI, including its own?