Italy targets DeepSeek in 2nd regulatory probe over AI hallucination warnings

Italy’s antitrust regulator AGCM has opened an investigation into Chinese AI startup DeepSeek for allegedly failing to adequately warn users about the risk of AI hallucinations in its responses. The probe represents the latest regulatory challenge for DeepSeek in Italy, following a February order from the country’s data protection authority to block access to its chatbot over privacy concerns.

What you should know: The Italian Competition and Market Authority (AGCM), which oversees both antitrust issues and consumer protection, is examining whether DeepSeek provides sufficient warnings about AI-generated misinformation.

  • The regulator claims DeepSeek did not give users “sufficiently clear, immediate and intelligible” warnings about potential hallucinations in AI-produced content.
  • AGCM defines AI hallucinations as “situations in which, in response to a given input entered by a user, the AI model generates one or more outputs containing inaccurate, misleading or invented information.”

In plain English: AI hallucinations occur when chatbots confidently provide false information that sounds plausible—like claiming a made-up historical event actually happened or inventing fake statistics that seem credible.

The big picture: Italy is taking an increasingly aggressive stance toward AI companies operating within its borders, particularly those that may pose consumer protection or data privacy risks.

  • This marks the second major regulatory action against DeepSeek in Italy within four months.
  • The dual investigations signal Italian authorities’ broader concerns about AI safety and transparency in consumer-facing applications.

Previous regulatory action: DeepSeek faced earlier scrutiny from Italy’s data protection watchdog in February.

  • The data protection authority ordered DeepSeek to block access to its chatbot after the company failed to address privacy policy concerns.
  • The privacy investigation preceded the current consumer protection probe by several months.

What happens next: DeepSeek has not yet responded to requests for comment about the latest investigation.

  • The AGCM probe could result in fines or requirements for DeepSeek to modify its user interface and warning systems.
  • The outcome may influence how other AI companies operating in Italy approach user disclosure requirements.
