Wikipedia pauses AI summaries after editors call them threat to credibility

Wikipedia has paused its AI-generated summary trial after just two days following harsh criticism from the platform’s volunteer editors. The Wikimedia Foundation, the non-profit that runs Wikipedia, launched the feature on June 2 for 10% of mobile users, but editor backlash over accuracy and credibility concerns forced an immediate halt to what was planned as a two-week experiment.

What you should know: The AI summaries appeared at the top of Wikipedia articles with yellow “unverified” labels, generated by Cohere Labs’ open-weight Aya model.

  • Users had to tap to expand and read the summaries, which were only visible to those with the Wikipedia browser extension who opted into the trial.
  • The feature was designed to provide “simple summaries” for mobile users, but editors viewed it as a threat to Wikipedia’s reputation for reliability.

What editors are saying: Wikipedia’s volunteer community delivered scathing feedback about the potential damage to the platform’s credibility.

  • “This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source,” said one editor, adding “Let’s not insult our readers’ intelligence and join the stampede to roll out flashy AI summaries.”
  • Another editor criticized the lack of editorial oversight: “With Simple Article Summaries, you propose giving one singular editor with known reliability, and NPOV issues a platform at the very top of any given article whilst giving zero editorial control to others.”
  • Other reactions included “Yuck,” “Absolutely not,” and “Very bad idea.”

The bigger picture: Wikipedia’s experience reflects broader industry struggles with AI-generated content accuracy across major platforms.

  • Apple was forced to pause its AI notification summaries in January after the feature generated false news alerts, while Bloomberg has corrected more than three dozen AI-generated summaries this year over accuracy issues.
  • The incident highlights the tension between making content more accessible and maintaining editorial standards that Wikipedia has built over decades.

What’s next: Despite pausing the current trial, Wikipedia hasn’t abandoned AI summaries entirely.

  • The platform will continue testing features to make content accessible for different reading levels, but human editors will remain central to determining what information appears on Wikipedia.
  • A Wikimedia Foundation spokesperson confirmed that editorial oversight will continue to be pivotal in future AI integration efforts.
