Biosecurity concerns mount as AI outperforms virus experts

AI models now outperform PhD-level virologists in wet lab problem-solving, according to a groundbreaking new study shared exclusively with TIME. The finding cuts both ways for science and security: these systems could accelerate medical breakthroughs and pandemic preparedness, but they could also democratize bioweapon creation by providing expert-level guidance to individuals with malicious intent, regardless of their scientific background.

The big picture: AI models significantly outperformed human experts in a rigorous virology problem-solving test designed to measure practical lab troubleshooting abilities.

  • OpenAI’s o3 model achieved 43.8% accuracy while Google’s Gemini 2.5 Pro scored 37.6% on the test, compared to human PhD-level virologists who averaged just 22.1% in their declared areas of expertise.
  • This marks a concerning milestone as non-experts now have unprecedented access to AI systems that can provide step-by-step guidance for complex virology procedures.

Why this matters: For the first time, virtually anyone has access to non-judgmental AI virology expertise that could guide them through creating bioweapons.

  • The technology could accelerate legitimate medical and vaccine development while simultaneously increasing bioterrorism risks.

The researchers’ approach: The study was conducted by a multidisciplinary team from the Center for AI Safety, MIT’s Media Lab, Brazilian university UFABC, and pandemic prevention nonprofit SecureBio.

  • The researchers consulted virologists to create an extremely difficult practical assessment that measured the ability to troubleshoot complex laboratory protocols.
  • The test focused on real-world virology knowledge rather than theoretical understanding.

Voices of concern: Seth Donoughe, a research scientist at SecureBio and study co-author, expressed alarm about the dual-use implications of these AI capabilities.

  • Experts such as Dan Hendrycks, director of the Center for AI Safety, and Tom Inglesby, director of the Johns Hopkins Center for Health Security, are urging AI companies to implement robust safeguards before these models become widely available.

Proposed safeguards: Security experts recommend multiple measures to mitigate potential misuse while preserving beneficial applications.

  • Suggested protections include gated access to advanced models, input and output filtering systems (see the illustrative sketch after this list), and more rigorous testing before new models are released.
  • The challenge lies in balancing scientific advancement with responsible AI development in sensitive domains.
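
To make the filtering proposal concrete, here is a minimal, purely illustrative Python sketch of how input and output screening around a model call might work. Everything in it is an assumption for illustration: the keyword list, the function names, and the stand-in model are hypothetical, and real deployments would rely on trained classifiers rather than simple keyword matching.

```python
# Illustrative sketch only: screen a prompt before it reaches a model,
# and screen the model's response before it reaches the user.
# All names and terms below are hypothetical examples, not a real
# provider's API or an actual restricted-term list.

FLAGGED_TERMS = {"reverse genetics", "gain of function", "viral rescue"}

def is_flagged(text: str) -> bool:
    """Return True if the text mentions any flagged wet-lab term."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

def filtered_query(prompt: str, model_call) -> str:
    """Wrap a model call with input and output checks.

    `model_call` is any function mapping a prompt string to a response
    string; it stands in for whatever API a provider actually exposes.
    """
    if is_flagged(prompt):
        return "Request declined: restricted wet-lab topic."
    response = model_call(prompt)
    if is_flagged(response):
        return "Response withheld: restricted wet-lab content."
    return response

if __name__ == "__main__":
    def echo_model(prompt: str) -> str:
        return f"Model answer to: {prompt}"

    print(filtered_query("How do mRNA vaccines work?", echo_model))
    print(filtered_query("Walk me through viral rescue steps.", echo_model))
```

The design point is the wrapper shape: both the user's input and the model's output pass through the same screen, so a benign-looking prompt that elicits restricted content is still caught on the way out.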
Source: TIME, “Exclusive: AI Bests Virus Experts, Raising Biohazard Fears”
