When AI Sends a Ghost Tip: Newsrooms Confront Automation’s Limits

When the assignment appeared in the newsroom system shortly after 3 a.m., it looked routine enough: timestamped, tagged and labeled as a fresh lead generated by an automated assistant. But when an analyst opened the file, there was no scandal, no lawsuit, no policy change to chase — only a machine’s polite explanation of what it would need to see before it could write about a story that had never actually been described.

“There is no actual news lead or event contained in the material you provided,” the automated note read. It went on to explain that without basics such as a headline, date, location or people involved, “anything I tried to ‘research’ from this would be pure invention.”

The ghost assignment never turned into a published article. Yet the non‑story offers a glimpse of how artificial intelligence is beginning to sit between newsrooms and the world they cover, and of what can happen when those systems misfire or are fed incomplete information.

AI in the newsroom: promise, pressure, and failure modes

Across the industry, publishers are turning to AI to sift tips, summarize documents and draft background material under crushing economic and time pressure. The episode shows both the promise and the limits of those tools: the same systems that can speed up newsgathering can also generate confusion, or in some cases outright fiction, if human skepticism slips.

Over the past three years, news organizations from local papers to national brands have experimented with generative AI models that can draft text, suggest headlines and scan large datasets. Wire services have long used simpler, template-driven automation to produce earnings summaries and sports box-score recaps; a toy sketch of that approach appears below. The latest wave of tools, powered by large language models, can do far more, and sometimes too much.
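That older approach amounts to structured data slotted into fixed sentence frames, with no generative model involved. The function below is an invented illustration of the idea, not any wire service's actual system:

```python
# Toy example of template-driven automation: structured earnings data
# slotted into a fixed sentence frame. Company name and figures are invented.

def earnings_recap(company: str, eps: float, expected_eps: float, revenue_bn: float) -> str:
    beat_or_miss = "beat" if eps >= expected_eps else "missed"
    return (
        f"{company} reported quarterly earnings of ${eps:.2f} per share, "
        f"which {beat_or_miss} analyst expectations of ${expected_eps:.2f}, "
        f"on revenue of ${revenue_bn:.1f} billion."
    )

print(earnings_recap("Example Corp", 1.42, 1.35, 12.8))
```

A system like this cannot hallucinate, because it can only restate the structured data it is given; the trade-off is that it can only cover events that arrive in that rigid form.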

In late 2022 and early 2023, the technology news site CNET quietly used an AI system to help write dozens of personal finance explainers. After outside reporters scrutinized the articles, the outlet acknowledged factual errors and issued corrections on 41 of the roughly 77 machine-assisted pieces. The site’s then-editor-in-chief, Connie Guglielmo, wrote that the stories were “drafted using an internally designed AI engine” and edited by humans, but conceded that “we didn’t do it right” and pledged tighter oversight.

Another experiment drew backlash in August 2023, when Gannett paused automated high school sports recaps after readers circulated awkward, repetitive items that misidentified teams and read like template text. A Gannett spokesperson said the system was being “updated and improved,” and the company later resumed limited use.

Guardrails, and why they matter

Professional groups have started to respond. In August 2023, The Associated Press released guidance saying it does not use generative AI to create publishable news stories and warning staff not to rely on AI tools for “facts that have not been verified.” The AP, which has used automation for corporate earnings reports since 2014, said generative systems could help with tasks such as transcribing audio and drafting headlines, but insisted that “AP journalists must be in control of how AI is used in news gathering and production.”

Media ethicists say the ghost‑lead incident illustrates a narrower but related risk: automation bias, the tendency to trust the output of a machine simply because it came from a machine.

“If a system marks something as a high‑priority lead, there’s a strong pull to assume it’s real and important, even when the underlying information is thin,” said Nicholas Diakopoulos, a professor at Northwestern University who studies computational journalism. “The danger isn’t just hallucinated content. It’s also the time and attention newsrooms may waste chasing things that were never stories to begin with.”

The non‑lead was, in one sense, a best‑case failure. The AI tool refused to fabricate details, explicitly listing what it needed: a short description of the event, a time window, key people or organizations, a location and a source outlet or document. That list — essentially a restatement of the who, what, where, when and source that reporters have been taught for generations — highlights how even sophisticated systems still depend on clear human inputs.
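That checklist is easy to express as a guardrail in code. The sketch below is a minimal, hypothetical version, with invented field names rather than any real newsroom's schema, of an intake check that declines, rather than invents, when the basics are missing:

```python
# Hypothetical intake guardrail: before any "research" step runs, check a tip
# for the basics a reporter would need. Field names are invented for illustration.

REQUIRED_FIELDS = {
    "summary",      # short description of the event
    "time_window",  # when it happened, even roughly
    "actors",       # key people or organizations
    "location",     # where it happened
    "source",       # outlet, document, or person it came from
}

def validate_lead(tip: dict) -> tuple[bool, list[str]]:
    """Return (ok, missing); ok is False if any required field is absent or empty."""
    missing = [field for field in sorted(REQUIRED_FIELDS) if not tip.get(field)]
    return (not missing, missing)

ghost_tip = {"summary": "", "source": "automated assistant"}
ok, missing = validate_lead(ghost_tip)
if not ok:
    # Decline explicitly instead of letting a model invent the missing details.
    print("Cannot research this lead; missing:", ", ".join(missing))
```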

“It shows that some tools can be configured to say, ‘I don’t know’ instead of hallucinating,” said Emily Bell, director of the Tow Center for Digital Journalism at Columbia University. “But it also exposes how brittle these chains can be. If one automated step mislabels a meta‑message as a lead, you can imagine how errors might cascade if there aren’t humans checking.”

When AI does not stop at “I don’t know”

The more worrying failures have come when AI has not stopped at saying it lacks information.

In late 2023, the sports media brand Sports Illustrated faced criticism after the tech outlet Futurism reported that some product reviews on its site appeared to have been generated by AI under fake author names with AI‑generated profile photos. The Arena Group, which then operated SI’s media business, said the content came from an external vendor and removed the articles, calling the fake profiles “a violation of our policies.”

Outside journalism, large language models have made headlines for fabricating legal citations. In 2023, a federal judge in New York sanctioned two lawyers after they filed a court brief that cited six nonexistent cases apparently generated by ChatGPT. The episode underscored how convincing AI‑generated text can be — and how essential independent verification remains.

Newsrooms that adopt AI are trying to build that verification into their workflows. Reuters, in an internal handbook, instructs staff that AI “cannot be treated as an authoritative source” and that all generated material must be “rigorously checked against primary sources.” The BBC has said it will not publish AI‑generated text unedited and will disclose significant uses of generative tools to audiences.

The invisible infrastructure shaping news

Still, much of the AI now in use sits far from public view, in systems that scan social networks for breaking news, rank potential stories or draft internal research memos. Those systems rarely get the kind of scrutiny that a fully automated article attracts.

“The invisible infrastructure is where we should probably be most concerned,” said Meredith Broussard, a data journalism professor at New York University and author of a book on AI and bias. “If an AI‑written story runs with a byline, readers can react and editors can correct. But if models are silently shaping what stories get pitched, what background lands in a reporter’s inbox or which communities are deemed newsworthy, that’s much harder to audit.”

The ghost lead shows how a small error in that infrastructure, an AI’s meta‑response treated as a substantive tip, can ripple through an entire workflow.

For now, the safeguards are as much cultural as technical: training reporters to treat AI outputs as starting points, not finished work; labeling machine‑generated notes clearly; and reinforcing old‑fashioned habits in a new setting.
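The labeling habit, in particular, can be made mechanical. The sketch below uses an invented schema and routing rules, not any newsroom's actual system, to show how an explicit origin tag could keep a model's meta-response from ever being routed as a lead:

```python
# Hypothetical message labeling: every item entering the assignment queue
# carries an explicit origin and kind, so a model's meta-response (like the
# ghost lead's "I need more information" note) is never routed as a tip.

from dataclasses import dataclass

@dataclass
class QueueItem:
    text: str
    origin: str  # "human" or "model"
    kind: str    # "tip", "summary", or "meta" (e.g., a model asking for inputs)

def route(item: QueueItem) -> str:
    if item.origin == "model" and item.kind == "meta":
        return "needs_human_review"  # never treat a meta-message as a lead
    if item.kind == "tip":
        return "assignment_desk"
    return "background_folder"

ghost = QueueItem(
    text="There is no actual news lead or event contained in the material...",
    origin="model",
    kind="meta",
)
print(route(ghost))  # -> needs_human_review, not a 3 a.m. assignment
```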

The Society of Professional Journalists’ Code of Ethics, revised long before generative AI, urges reporters to “verify information before releasing it” and “never deliberately distort facts or context.” Media ethicists argue those principles apply just as strongly when the information comes from a machine.

“Journalism’s basic disciplines haven’t changed,” Diakopoulos said. “You still need to be able to answer who, what, where, when, why and how with reference to something that actually happened in the world.”

In the case of the 3 a.m. assignment, that question — what, exactly, happened? — had no answer. The system logged a lead, an analyst opened it, and the trail ended with a tool that, for once, declined to make something up.

As publishers lean more heavily on AI to spot and shape stories, the episode offers a quiet reminder. Machines can rank, summarize and suggest, but they cannot determine that something real has happened. That task still falls to human journalists, whose first job is to ask whether there is a story at all.

Tags: #ai, #journalism, #newsrooms, #automation, #mediaethics