Italy's antitrust watchdog AGCM has launched a formal investigation into Chinese artificial intelligence startup DeepSeek for allegedly failing to adequately warn users about the risk of "hallucinations" - situations where the AI model generates inaccurate, misleading, or fabricated information in response to user inputs, as reported by Reuters.
The Italian Data Protection Authority (Garante) imposed an emergency ban on DeepSeek AI after finding multiple GDPR violations. Despite DeepSeek's claim that "European legislation does not apply to us," the authority discovered the company was collecting data from Italian users while failing to meet basic compliance requirements.[1][2] The investigation revealed several specific breaches, including:
- Failure to provide a privacy policy in Italian, violating transparency principles[1]
- Insufficient information about data processing activities and legal basis[1]
- Storage of personal data on servers in China without proper safeguards for international data transfers[3]
- No appointed EU representative, as required for non-EU companies processing EU citizens' data[1][3]
- Refusal to cooperate with supervisory authorities, breaching Article 31 of GDPR[1]
The violations could result in fines of up to €20 million or 4% of global annual turnover, whichever is higher, with potential criminal consequences for non-compliance with the Garante's decision.[3] This case highlights the growing challenge of enforcing GDPR against foreign AI companies that operate in Europe while claiming immunity from its jurisdiction.[2]
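The "whichever is higher" rule in GDPR Article 83(5) means the €20 million floor only binds for smaller companies; above €500 million in turnover, the 4% rule dominates. A minimal sketch of that calculation (the turnover figure is purely illustrative, not DeepSeek's actual revenue):

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR Art. 83(5) administrative fine:
    the greater of EUR 20 million or 4% of global annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# Hypothetical turnover of EUR 1 billion: the 4% rule dominates
print(gdpr_max_fine(1_000_000_000))  # 40000000.0

# Hypothetical turnover of EUR 100 million: the EUR 20M floor applies
print(gdpr_max_fine(100_000_000))    # 20000000.0
```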
DeepSeek's AI models have been found to exhibit alarming hallucination rates, with research showing the DeepSeek R1 model has a hallucination rate of 14.3%, nearly four times higher than its predecessor DeepSeek V3 (3.9%).[1] These hallucinations—instances where the AI fabricates false information while presenting it as factual—pose serious risks, including misinformation distribution, faulty business decisions, intellectual property exposure, and regulatory compliance violations.[2] Security testing has revealed even more concerning statistics, with AppSOC reporting that DeepSeek produced hallucinated information 81% of the time during their evaluations.[3]
Multiple security assessments have exposed critical safety flaws in DeepSeek's models. Cisco researchers discovered that DeepSeek R1 exhibited a 100% failure rate when tested against harmful prompts from the HarmBench dataset, showing no resistance to algorithmic jailbreaking attempts.[4] The model has also demonstrated bizarre self-identification issues, incorrectly claiming to be "Claude, created by Anthropic" or stating "My guidelines are set by OpenAI"—both entirely false assertions that highlight fundamental reliability problems.[1] These vulnerabilities appear to stem from DeepSeek's cost-efficient training methods, which may have compromised essential safety mechanisms in pursuit of rapid development and lower costs.[4][1]
DeepSeek's storage of user data on Chinese servers presents significant privacy and security concerns for EU users. The company appears to collect extensive personal information—including chat histories, input prompts, device metadata, IP addresses, and behavioral analytics[1]—while storing this data in China without implementing the safeguards required by GDPR for international data transfers.[2][3] This practice is particularly problematic because China is not considered to provide adequate data protection under EU standards, and Chinese national security laws could potentially compel DeepSeek to hand over EU user data to government authorities.[2][1]
Multiple European data protection authorities have launched investigations into these practices. Italy's Garante ordered DeepSeek to block Italian access to its R1 app after the company failed to address concerns about its data storage practices.[4] Belgium, the Netherlands, France, and Ireland have also initiated information requests to determine whether DeepSeek's data collection breaches GDPR by transferring personal data to China.[5] DeepSeek's privacy policy notably lacks any mention of Standard Contractual Clauses or other legal mechanisms required to facilitate lawful data transfers from the EU to China, raising serious questions about the company's commitment to European data protection standards.[2][3]