August 14, 2025
New Frontiers in Journalism: How Transparency and AI Are Transforming the Newsroom

In this two-part series, journalism professor and former Wall Street Journal reporter Amy Merrick unpacks two influential trends reshaping the media landscape: source transparency and generative AI. From The Washington Post’s hotly debated “From the Source” pilot to the opportunities and risks of AI in journalism, the conversation examines how newsrooms are experimenting with new ways to build trust, efficiency and engagement in an era of rapid technological change.
AI in Journalism: Why Reporters and News Outlets Must Get in the Game—Or Risk Falling Behind
This is the second article in the series. You can view the first article here.
Generative AI is reshaping how stories are discovered, told, and trusted. At Greentarget, we’ve spent a lot of time thinking about what that means for content in the age of AI—including how it impacts our professional services clients when it comes to earned media.
As news outlets grapple with the challenges and opportunities that AI presents, we sat down to talk about its impacts on the current journalism landscape with journalism educator Amy Merrick, who brings insights from the newsroom and the classroom to one of the most complex—and fast-moving—technological shifts facing media today. A former reporter for The Wall Street Journal, Merrick is a senior professional lecturer at DePaul University’s College of Communication.
AI’s impact on journalism isn’t just part of her syllabus—Merrick recently enrolled in a master’s program in computer science in order to understand what AI means for journalists. In the second part of our Q&A (the first focused on The Washington Post’s “From the Source” program), we explore why she decided to dive into AI headfirst and the advantages and risks she sees for journalists.
———
Lisa Seidenberg: Let’s switch gears to discuss AI and your decision to pursue a master’s in computer science. How’s it going so far?
Amy Merrick: I started the program because I genuinely believe that generative AI will impact everything I do, including media and education. You hear people say that we need participation from a wide range of industries and skill levels. And I agree. People from diverse backgrounds need to be part of the conversation, especially when it comes to ethical implications.
Eventually, I thought, ‘Okay, I keep saying this; maybe I actually need to be one of those people participating.’ And I needed to get a lot more educated to do that in a useful way. It’s easy to critique something you don’t understand, but it’s way more helpful to understand it from the inside. Why does it have particular strengths and weaknesses? What aspects of the system’s architecture lead to these issues, and what kind of systems do we want in the future?
One of the most incredible things so far is that computers no longer feel like magic. Understanding how programming and computers work helps demystify the process, and it gives me more confidence to join conversations about this topic.
There’s a lot more to come, but I’m happy I’m learning.
LS: I was especially interested in your analysis about why you started the program, especially your thoughts on generative AI and how it’s going to change journalism—both the good and the bad. You mentioned some examples in your LinkedIn post. What else are you seeing in the media landscape related to AI’s impact on journalism right now?
AM: There are lots of experiments happening right now. The Associated Press, for example, has documented pilot projects that utilize AI to assist reporters. One tool scans city council meeting transcripts for keywords and alerts reporters. Anyone who covers local news knows how challenging it is to keep up with all those meetings, but the information can be essential.
There are other tools for sending push notifications about weather alerts, and data journalists are finding AI helpful for digging through massive datasets they couldn’t manage on their own. Of course, humans still need to fact-check and be transparent about the limits of these tools, but the potential for data journalism is huge.
What surprises me on the downside is how fast some outlets jumped to using generative AI for writing stories. That’s one of the worst uses for the technology right now.
LS: Interesting. Can you expand on that?
AM: The way large language models work is basically by predicting the next word in a sequence. To prevent things from becoming too repetitive, they introduce some randomness, which can lead to creative or unexpected results, sometimes cool, sometimes not.
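Merrick’s description—predicting the next word, with some randomness mixed in so the output doesn’t become repetitive—is often implemented as temperature sampling. Here’s a toy sketch of that idea (the word scores are invented for illustration; this is not how any production model is configured):

```python
import math
import random

def sample_next_word(scores, temperature=1.0, rng=random):
    """Pick the next word from a dict of model scores (logits).

    Lower temperature -> nearly always the top-scoring word;
    higher temperature -> more randomness, more surprising picks.
    """
    # Softmax with temperature: scale the scores before exponentiating.
    scaled = [s / temperature for s in scores.values()]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]  # probabilities summing to 1
    # The weighted random draw is where the "creativity" comes from.
    return rng.choices(list(scores), weights=weights)[0]

# Hypothetical scores for words that might follow "The court found ..."
scores = {"liable": 2.0, "negligent": 1.0, "purple": -3.0}
```

At a very low temperature the function nearly always returns the top-scoring word; crank the temperature up and even the implausible word starts appearing—which is exactly the behavior that makes unverified AI-written copy risky for journalism.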
But when you’re writing journalism, you can’t have the AI making things up or suggesting stuff that isn’t factual. There have been cases, such as the Chicago Sun-Times incident, where an AI-generated reading list recommended books that sounded plausible because they matched themes those authors write about. But the books weren’t real, and no one checked. It could have happened anywhere, yet it became a symbol of how AI can mislead publishers.
That’s tricky because you have to know a lot about the topic to catch those errors or do very thorough fact-checking. I’m surprised by how quickly some places have jumped into using generative AI for writing, considering it’s not yet ready for that purpose.
LS: I also wanted to get your take on the policy Law360 put in place requiring that all stories pass through an AI bias detection tool. Did you see that?
AM: I did! It was interesting. I conducted a small experiment in one of my classes, where students used AI to assess a news story for bias and then compared the AI’s suggestions with their partner’s thoughts. Overall, the class felt AI made some valuable suggestions. I don’t think using AI for bias detection is a crazy idea. But I was surprised they mandated it so soon, given how new and untested it is. I get why staff pushed back.
LS: Can we discuss this pushback further? What were the reporters’ concerns?
AM: The AI tool tends to suggest toning down language about wrongdoing to sound more neutral. But for a law publication, if a judge or jury found evidence of wrongdoing, you need to communicate that. So, the AI’s tendency to neutralize could interfere with accurate reporting. I like the idea of pilots and experiments, but mandating it so early seems premature. Maybe management thought they had to push it for people to try it, but trust-building between staff and management usually works better than mandates with new, untested tools.
It’s like what we said earlier. AI tools can only be practical if they keep pace with the specialized knowledge reporters bring. The tools have to make the work better, or they’re not worth it.
Change is hard for everyone, including me. There’s always some pushback, which is a healthy sign. But you need a process of testing, refining, getting people on board. If everyone resists, you have to figure out why.
LS: PR software providers Muck Rack and Cision recently released their 2025 reports on the state of journalism and media, both of which delve into how reporters are using AI. ChatGPT is the most widely adopted tool, according to Muck Rack—it’s used by 42% of respondents—and transcription tools and writing tools like Grammarly were the next most popular, used by 40% and 35% of respondents, respectively. What’s your reaction to these findings? Do they align with what you’re seeing?
AM: Transcription is probably the least controversial use, and it’s the most common, too. Tools like Otter have been using AI for some time now, and the results are pretty impressive. Additionally, you always have the original audio to refer back to if you need to verify something.
Even for our student magazine, Fourteen East, which I advise, whoever is fact-checking a story will compare the transcript and the audio for any quoted material. So, AI saves a lot of time; transcribing manually can be incredibly slow.
We touched on writing earlier, but I don’t use AI to generate first drafts, and I ask my students not to either. Writing is an integral part of the thinking process. It helps you figure out what you think.
And I also worry about anchoring bias. Once you’ve got a draft, even if it came from AI, there’s a tendency to commit to it mentally, which can limit creativity and critical thinking. It’s harder to deviate from that first version.
For journalists, there are often legal and ethical concerns about inputting proprietary or unpublished material into AI tools, particularly if the data is used to train the model. Coming from The Wall Street Journal, where insider trading is a concern, we had to be extra careful. So that’s always in the back of my mind.
That said, people should experiment with AI, even in small ways. Try it out in everyday, low-stakes situations — even something like figuring out what to plant in your backyard! That way, even if you don’t end up using it professionally, you understand what it does, what you like or don’t like about it, and can speak knowledgeably about it.
Because AI isn’t going anywhere, and if you’re not using it at all, it’s harder to be part of the larger conversation around it, and that’s a conversation more people should be part of.
So, yeah, low-stakes experiments, that’s where I’d love to see more focus. It’s how we start to understand the strengths and limitations of these tools, and how they can (or can’t) be useful in different contexts.
LS: That makes sense. Is there anything you’re watching closely right now or thinking about in terms of where this is all headed?
AM: I’d love to see more people from a broader range of backgrounds and industries involved in shaping the direction AI goes. Not just people in Silicon Valley or working in tech. These tools will impact numerous aspects of life and society, so the more diverse perspectives we have in the room, the better.
At Greentarget, we help clients lead with authority in a media environment where technology, misinformation, and shifting trust are resetting the terms of engagement at lightning speed. If you’re exploring how AI could shape your communications, newsroom relationships, or thought leadership strategy, we’re here to guide you.
Learn how Greentarget helps organizations navigate the future of media with clarity, credibility, and purpose. Get in touch.