AI for Newsrooms
Real-world AI projects from newsrooms worldwide, curated news about AI in newsrooms, case studies, reports, research, and guides to help journalists and news organizations understand and implement AI.
An AI Upheaval Is Coming for Media
Journalist Nick Lichtenberg uses AI tools to produce a high volume of stories quickly. He uploads press releases and analyst notes into AI tools, prompting them to generate articles that he can edit and publish. In six months, Lichtenberg produced more stories than any of his colleagues at Fortune did in a year, at one point publishing seven articles in a single day. His approach gives AI a leading role in researching and writing stories, a method some view as unconventional in journalism, and his exceptionally high output makes him an outlier in the field.
The Wall Street Journal
Systems over funnels: INMA Media Subscriptions Summit offers winning strategies for 2026
Audience-first companies are shifting from traditional funnels to systems where reach, engagement, and retention drive revenue. Top takeaways from the INMA Media Subscriptions Summit include: reinventing the funnel to focus on distributed systems and high-value users; challenging assumptions around products, pricing, and organizational structures; and bridging strategy and execution by making data actionable in newsrooms. Companies are reorganizing to integrate subscription and marketing functions, using data-informed editorial decisions, and leveraging AI to drive growth and revenue.
INMA
News chatbots: hear from early adopters
This webinar, recorded on March 20, 2026, examines how European media organisations are developing AI-powered news chatbots and addressing concerns related to accuracy, sourcing, and user trust. It brings together representatives from several European media organisations that are experimenting with AI-driven news assistants.
📺 ChatEurope
LLMs Can’t Provide Faithful Explanations Needed for AI Accountability
Large Language Models (LLMs) are unable to provide faithful explanations for their decisions, which is crucial for AI accountability. Faithful explanations accurately represent the reasoning behind a model's output, and their absence can mislead decision-making and hinder efforts to assign blame or correct mistakes. Research has shown that LLMs' explanations are often inaccurate, with larger models producing more faithful explanations but still exhibiting high variance across tasks. As a result, policymakers may need to establish thresholds for when models can be used in high-stakes contexts and develop standardized benchmarks for faithfulness to support accountability.
AI Accountability Review
Automatic speech recognition in Python
Audio file transcription using NVIDIA's Parakeet family of multilingual models, with fallback to Whisper.cpp for languages outside Parakeet's scope.
GitHub
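The fallback design described above can be sketched as a simple routing step: check whether the requested language is covered by the Parakeet models and dispatch to Whisper.cpp otherwise. This is a minimal illustration, not the repository's actual code; the language list and engine names below are assumptions for the example.

```python
# Illustrative subset of languages assumed to be covered by Parakeet's
# multilingual models -- consult the actual model card for the real list.
PARAKEET_LANGS = {"en", "de", "fr", "es", "it", "pt", "nl", "pl"}


def pick_engine(lang_code: str) -> str:
    """Return which ASR engine to use for a given ISO 639-1 language code.

    Languages Parakeet handles go to Parakeet; everything else falls
    back to Whisper.cpp, mirroring the strategy the project describes.
    """
    return "parakeet" if lang_code.lower() in PARAKEET_LANGS else "whisper.cpp"


# In a real pipeline, each branch would then load the matching model,
# e.g. a NeMo Parakeet checkpoint or a Whisper.cpp binding (details
# depend on the libraries the repository actually uses).
```

In practice the routing function would sit in front of two loaders so that the heavyweight model download only happens for the engine actually selected.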
Weekly Newsletter.
One Friday email that rounds up AI for Newsrooms.
Chat with AI for Newsrooms
Let's Build the AI Index for Newsrooms
Submit AI initiatives your newsroom is working on, resources, policies, or media content to help journalists and newsrooms worldwide stay ahead in the AI era.