Italy’s antitrust regulator AGCM has opened an investigation into Chinese AI startup DeepSeek for allegedly failing to adequately warn users about the risk of AI hallucinations in its responses. The probe represents the latest regulatory challenge for DeepSeek in Italy, following a February order from the country’s data protection authority to block access to its chatbot over privacy concerns.
What you should know: The Italian Competition and Market Authority (AGCM), which oversees both antitrust issues and consumer protection, is examining whether DeepSeek provides sufficient warnings about AI-generated misinformation.
In plain English: AI hallucinations occur when chatbots confidently provide false information that sounds plausible—like claiming a made-up historical event actually happened or inventing fake statistics that seem credible.
The big picture: Italy is taking an increasingly aggressive stance toward AI companies operating within its borders, particularly those that may pose consumer protection or data privacy risks.
Previous regulatory action: In February, Italy’s data protection watchdog ordered access to DeepSeek’s chatbot blocked over concerns about how the company handles users’ personal data.
What happens next: DeepSeek has not yet responded to requests for comment on the new investigation.