Hugging Face cofounder pooh-poohs AI science hype, says AI can't ask the right questions

Hugging Face cofounder Thomas Wolf is challenging the tech industry’s optimistic predictions about AI’s potential to revolutionize scientific discovery. Speaking at VivaTech in Paris, Wolf argued that current large language models excel at finding answers but lack the creativity to ask original scientific questions—a critical limitation that may produce digital “yes-men” rather than breakthrough discoveries.

What they’re saying: Wolf believes the fundamental challenge lies in AI’s inability to challenge existing frameworks of knowledge.

  • “In science, asking the question is the hard part, it’s not finding the answer,” Wolf told Fortune. “Once the question is asked, often the answer is quite obvious, but the tough part is really asking the question, and models are very bad at asking great questions.”
  • “Models are just trying to predict the most likely thing,” he explained. “But in almost all big cases of discovery or art, it’s not really the most likely art piece you want to see, but it’s the most interesting one.”

The big picture: Wolf’s skepticism stems from his analysis of Anthropic CEO Dario Amodei’s widely circulated blog post “Machines of Loving Grace,” which predicted AI would compress decades of scientific progress into just a few years.

  • Initially inspired by Amodei’s vision of AI solving cancer and mental health problems, Wolf grew doubtful after re-reading the piece.
  • “It was saying AI is going to solve cancer, and it’s going to solve mental health problems—it’s going to even bring peace into the world. But then I read it again and realized there’s something that sounds very wrong about it, and I don’t believe that,” he said.

Why this matters: The debate highlights a fundamental question about AI’s limitations in creative and scientific thinking, with implications for how the technology industry allocates resources and sets expectations.

  • Wolf argues that what we currently have are models that behave like “yes-men on servers”—endlessly agreeable but unlikely to challenge assumptions or rethink foundational ideas.
  • His perspective contrasts sharply with leading AI labs’ ambitious claims about achieving artificial general intelligence and scientific breakthroughs.

Key analogy: Wolf uses the game of Go to illustrate his point about the difference between following rules and creating new frameworks.

  • While DeepMind’s AlphaGo impressively mastered Go’s rules to defeat world champions in 2016, Wolf argues the bigger challenge was inventing such a complex game in the first place.
  • In science, he suggests, the equivalent breakthrough would be asking truly original questions rather than simply processing existing knowledge.

What he’s proposing: In his earlier blog post “The Einstein AI Model,” Wolf outlined his vision for what truly transformative AI would require.

  • “To create an Einstein in a data center, we don’t just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask,” he wrote.
