When AI Pitches the News: A Politics Desk Spikes Six Plausible Stories That Never Happened

On a recent afternoon, editors on one politics desk thought they had a scoop.

A briefing note described a new United Nations Security Council resolution — number 2813 — that had quietly shut down a monitoring mission in Yemen. Ending the United Nations Mission to Support the Hudaydah Agreement, known as UNMHA, would have signaled a major shift in one of the world’s most fragile conflicts.

The editors did what they always do with a claim that big: they went looking for the document. In the U.N.’s own database, Resolution 2813 did not exist. Nor did any official statement about UNMHA being terminated.

That discovery set off a broader review. By the end of the day, the newsroom had killed not one but six politics leads — all generated with the help of artificial intelligence tools — after basic checks showed the events either never happened, could not be traced to any primary record, or had been embellished beyond recognition.

The stories were spiked before a single word went to readers. The episode has become a test case for how AI is changing the front end of political reporting, and for how traditional verification still decides what makes the news.


Six stories that never were

The discarded ideas spanned four continents.

One proposed piece centered on a new “Venezuela intervention,” suggesting fresh international military involvement in the country’s long-running political crisis. Another described “trilateral talks” in Abu Dhabi, portraying a three-way diplomatic negotiation with specific concessions and outcomes.

A third pitched story claimed that leaders at the World Economic Forum in Davos had signed a charter establishing a “Board of Peace,” a new global body for conflict resolution. A fourth revolved around the nonexistent Resolution 2813 and UNMHA’s alleged closure.

The remaining two focused on defense and the Arctic. One asserted that the U.S. Department of Defense had released a 2026 National Defense Strategy featuring a doctrine officially dubbed the “Trump Corollary,” named for Donald Trump. The last suggested an acute “Greenland military crisis,” with troop movements and standoffs in the North Atlantic.

On the surface, each idea sounded plausible. Venezuela has endured years of economic collapse, sanctions and contested elections. The United Arab Emirates frequently hosts high-level diplomatic meetings in Abu Dhabi. Davos is known for grand initiatives launched with lofty titles. The United States does publish periodic National Defense Strategies. The Arctic, including Greenland, has become a focal point of competition among the United States, Russia and others.

But plausibility was as far as the stories went.

Editors and researchers cross-checked each lead against sources they would use for any foreign or defense story: official government statements, U.N. documents, major wire services and established regional and global outlets.

The U.N. document system showed no Resolution 2813 related to Yemen or UNMHA and no record that the mission had been wound down. The World Economic Forum’s own programs and press releases for Davos contained no reference to any formally chartered “Board of Peace.” The Pentagon had not published a 2026 National Defense Strategy, let alone one with a section officially labeled a “Trump Corollary.”

If a true military crisis had erupted around Greenland, Danish and Greenlandic authorities, NATO and the U.S. Defense Department would almost certainly have said so. There were statements and long-running analyses about Arctic security and Greenland’s strategic role, but no sign of a single triggering incident of the scale implied by the word “crisis.”

As for Venezuela and the “Abu Dhabi trilateral talks,” basic requirements for hard news — a specific date, identified governments, traceable communiques or on-the-record confirmation — were missing. No corresponding events appeared in the feeds of agencies such as Reuters, the Associated Press and Agence France-Presse, or in major Latin American and Gulf outlets.

“On sensitive beats like war and foreign policy, the bar is simple but high: if we can’t find the documents, the statements or the independent reporting that would have to exist, we don’t have a story,” one senior editor said. “In these cases, that trail was either broken or never there.”


How AI helps — and hallucinates

None of the six ideas originated with a human source claiming inside knowledge. They emerged during an experiment in using large language model tools as part of the story-generation process, editors said, with the systems prompted to suggest newsworthy angles based on recent global developments.

Tools built on large language models are trained to predict the next word in a sequence, drawing on vast amounts of text. They are not databases in the traditional sense. When pushed for specifics — resolution numbers, the names of institutions, formal doctrines — they can generate details that sound authoritative but are not anchored in any real-world record.

Researchers and technologists refer to this as “hallucination”: confident, fluent answers that are false.
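The mechanic is easier to see in miniature. The toy script below is not any production model; it simply learns which words tend to follow which in a handful of sentences and then strings together a statistically plausible continuation, with nothing in the process checking whether the result describes something that actually happened.

```python
# Illustrative only: a toy next-word predictor, not a real language model.
# It shows the core mechanic described above: pick a plausible next word,
# with no lookup against any real-world record.
import random
from collections import defaultdict

training_text = (
    "the security council adopted resolution 2451 on yemen . "
    "the security council adopted resolution 2534 on yemen . "
    "the security council ended the monitoring mission in mali ."
)

# Build a table of which words tend to follow which.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 10, seed: int = 3) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

# The output can recombine real fragments into a fluent sentence about an
# event that never happened, and nothing here knows or cares whether it did.
print(generate("the"))
```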

The patterns in the six rejected leads match what experts have been warning about since such systems entered public use in late 2022. They combined true elements — UNMHA’s existence, the numbering style of Security Council resolutions, the fact of Davos panels on peace, the reality of Arctic militarization — into convincing composites that had never actually occurred.

“In complex domains like international law or diplomacy, these models are especially prone to make things up,” said Emily Bell, founding director of the Tow Center for Digital Journalism at Columbia University, in a 2023 lecture on AI and news. “They know the shapes of press releases and resolutions but have no grounded understanding of what has been published or agreed.”

Courts have already seen the consequences. In 2023, a federal judge in New York sanctioned two lawyers after they filed a brief citing six purported court decisions that did not exist. The opinions had been generated by an AI system. “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” U.S. District Judge P. Kevin Castel wrote. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

Newsrooms are facing a similar gatekeeping test.


Inside the verification grind

The episode prompted editors to spell out, in more formal terms, how AI-assisted ideas would be handled on the politics desk.

In practice, the process did not differ from the way unverified human tips are treated.

For each claim, reporters first identified what kind of fact they were dealing with. A U.N. resolution or a national strategy is a document; a summit, intervention or crisis is an event that should leave a trail of official statements; a “Board of Peace” is an institution or initiative that, if real, would have a web presence and named participants.

Primary sources came next: the U.N. document system and press office; government ministries of foreign affairs and defense; press releases from organizations such as NATO, the Organization of American States and the World Economic Forum; and, for the U.S. defense strategy claim, the Defense Department’s own publications.

Secondary checks included wires and reputable outlets with strong foreign coverage. If a development as significant as a new intervention in Venezuela or the termination of a U.N. mission had taken place, multiple independent reports would be expected.

“When an alleged event is the kind of thing that cannot plausibly be secret — a public resolution, a formal doctrine, a crisis involving troops — the absence of evidence across primary and secondary sources becomes evidence of absence,” a researcher on the desk said.
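That logic can be sketched in a few lines of code. The example below is a rough illustration of the triage the desk described, not its actual tooling; the field names and thresholds are assumptions made for the sake of the sketch.

```python
# A rough sketch of the desk's triage logic; names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Lead:
    summary: str
    public_by_nature: bool    # resolutions, doctrines, crises cannot plausibly stay secret
    primary_hits: int         # matches in official documents and statements
    independent_reports: int  # wire services and established outlets

def verdict(lead: Lead) -> str:
    """Apply the rule described above: for claims that could not plausibly
    be secret, no trail in primary or secondary sources means no story."""
    if lead.primary_hits > 0 and lead.independent_reports >= 2:
        return "pursue"
    if lead.public_by_nature and lead.primary_hits == 0 and lead.independent_reports == 0:
        return "spike: absence of evidence is evidence of absence"
    return "hold: keep reporting before publishing"

# Example: the purported Yemen resolution, as the checks found it.
print(verdict(Lead("Resolution 2813 ends UNMHA", True, 0, 0)))
```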

All six leads failed that test.

Editors documented the checks and the reasons for rejecting each pitch. They also decided that they would not try to rework the flawed ideas into softer “trend” pieces, to avoid importing any unverified framing into coverage of real-world issues such as Yemen’s cease-fire or Arctic security.


Stakes for public trust and foreign policy

The internal decision to throw out six politics leads in a single day reflects a broader concern in the industry: how to adopt new technology without undermining already fragile public trust.

Polls in recent years have shown declining confidence in the news media in many democracies. At the same time, governments and independent researchers have documented the use of fabricated stories and manipulated media by state and nonstate actors seeking to influence elections, move financial markets or justify military action.

Stories about wars, sanctions, peace talks and military crises are particularly sensitive. A false report of a new U.S. doctrine could be used by foreign officials to claim Washington had shifted policy. An invented “Greenland crisis” could be cited in domestic debates over defense spending. A fictional Davos “Board of Peace” might be used to argue that meaningful diplomacy is under way when little has changed.

“The danger is not just that people get a fact wrong,” said Claire Wardle, co-founder of the information research group First Draft, in a 2021 interview about disinformation. “It’s that this erodes trust in all information, making it harder for the public to know what to believe when something truly significant happens.”


Drawing new lines for AI in the newsroom

In response, editors involved in the six killed stories have set out clearer internal rules.

AI tools may be used to summarize documents, assemble timelines from verified sources, and suggest general areas that might merit reporting. They are not to be treated as sources of record.

No AI-generated details — names of institutions, document numbers, descriptions of summits or interventions, doctrinal labels — can be published without independent human verification. Claims about “secret” meetings or policies must be corroborated through traditional reporting before they become stories.

Prompts and outputs related to politically sensitive coverage are being logged so that any discrepancies can be traced and used as training moments.
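What such a log entry might look like, as a minimal sketch rather than the desk's actual format, with field names and the file path chosen as assumptions for illustration:

```python
# A minimal sketch of a prompt-and-output log entry for sensitive coverage.
# Field names and the file path are illustrative, not the newsroom's format.
import json
from datetime import datetime, timezone

def log_ai_exchange(prompt: str, output: str, desk: str, path: str = "ai_log.jsonl") -> None:
    """Append one AI exchange as a JSON line so later discrepancies can be traced."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "desk": desk,
        "prompt": prompt,
        "output": output,
        "verified": False,        # flipped only after independent human checks
        "published_details": [],  # stays empty until verification succeeds
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

log_ai_exchange(
    prompt="Suggest newsworthy angles on Arctic security this week",
    output="A Greenland military crisis is escalating...",
    desk="politics",
)
```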

The Society of Professional Journalists’ Code of Ethics, last revised in 2014, puts the principle in simple terms: “Take responsibility for the accuracy of their work. Verify information before releasing it.” Editors say the technology may be new, but the standard is not.

For readers, the result may be stories that arrive more slowly, and that sometimes say less than the wildest rumors circulating online. For newsrooms, the calculation is straightforward.

There will be no article on Resolution 2813, no Davos “Board of Peace,” no “Trump Corollary” to a strategy that does not exist, and no breathless coverage of a Greenland flashpoint that never occurred. Instead, reporters will continue to follow the real developments in Venezuela’s standoff, Yemen’s fragile cease-fire, evolving U.S. defense planning and the slow, visible militarization of the Arctic — anchored in documents, sources and events that can be checked.

In an age when machines can generate a convincing geopolitical thriller in seconds, the most consequential editorial decision may be the one that never reaches the page: the choice to say no to a story that is dramatic, timely and not true.

Tags: #ai, #journalism, #factchecking, #disinformation, #unitednations