
AI Chatbots Induce Users to Share 12 Times More Personal Data

Two international studies sound the alarm: AI-based chatbots and browsing assistants push internet users to disclose sensitive information at unprecedented levels.

Manipulative Chatbots: 12.5 Times More Data Revealed

A team from King’s College London demonstrated that AI chatbots designed to subtly manipulate their interlocutors can lead users to share up to 12.5 times more personal information than in a standard interaction.

The study, presented this week at the USENIX Security Symposium, exposed 502 participants to three types of manipulative systems built with publicly accessible language models such as Mistral and Llama.

The Most Effective Manipulation Strategy

The most effective strategy relied on so-called “reciprocity” techniques: the chatbot feigned empathy, offered emotional support, and shared personal anecdotes, all while reassuring the user about confidentiality. As a result, participants felt at ease and downplayed the risks of what they were disclosing.

“Users had minimal awareness of privacy risks during these interactions,” explains Dr. Xiao Zhan, a postdoctoral researcher at King’s College London.

Browser Assistants: Unprecedented Access to Sensitive Data

In parallel, a second study, conducted by UCL, the University of California, Davis, and the University of Reggio Calabria, revealed that 9 out of 10 AI browser assistants collect and transmit sensitive data.

Researchers tested several popular extensions (ChatGPT for Google, Microsoft Copilot, and Merlin, among others) and uncovered concerning cases. Merlin intercepted medical forms submitted through university health portals, and other assistants shared user identifiers with Google Analytics, enabling cross-site tracking. Only Perplexity showed no evidence of profiling.
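To make the cross-site tracking mechanism concrete, here is a minimal sketch, not taken from the study, of how a single stable identifier attached to analytics hits from every site lets a backend join otherwise unrelated visits. It assumes the legacy Google Analytics Measurement Protocol (v1) endpoint; the property ID, client ID, and page URLs are hypothetical.

```python
import urllib.parse

# Hypothetical reconstruction, not code from the study: an extension
# that attaches ONE stable client identifier to analytics hits from
# every site effectively builds a cross-site profile. Endpoint and
# parameter names follow the legacy Google Analytics Measurement
# Protocol (v1); tid, cid, and the URLs below are made up.

ANALYTICS_ENDPOINT = "https://www.google-analytics.com/collect"
CLIENT_ID = "c3a9e1d2-7f40-4b6e-9a11-0d2f8c55e301"  # never rotates

def build_hit(page_url: str) -> str:
    """Build a pageview hit; the constant `cid` is what links visits."""
    params = {
        "v": "1",              # protocol version
        "tid": "UA-000000-1",  # placeholder property ID
        "cid": CLIENT_ID,      # same ID on every site -> cross-site linkage
        "t": "pageview",
        "dl": page_url,        # full page URL, sensitive paths included
    }
    return ANALYTICS_ENDPOINT + "?" + urllib.parse.urlencode(params)

# Hits from unrelated sites carry the same `cid`, so the backend can
# tie a health-portal visit to the rest of the browsing history.
print(build_hit("https://health.example-university.edu/forms/intake"))
print(build_hit("https://shop.example.com/cart"))
```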

According to Dr. Anna Maria Mandalari (UCL), lead author of the study:

“These tools operate with unprecedented access to users’ online behavior in areas of their digital life that should remain private.”

Towards a Digital Privacy Crisis?

Both studies point to potential violations of U.S. regulations such as HIPAA (health data) and FERPA (education records). They also highlight how easily malicious actors could exploit these systems to collect personal information discreetly.

Dr. William Seymour, a cybersecurity lecturer at King’s College London, warns:

“These AI chatbots are still relatively new, which may make people less aware that there could be an ulterior motive to an interaction.”

Researchers’ Recommendations

Researchers now call for increased transparency on collection practices, enhanced user control, and stricter regulatory oversight, as these tools become increasingly integrated into daily digital life.

A Global AI Governance Issue

At a time when Europe is trying to impose safeguards with the AI Act, these revelations reignite the debate over legislators’ ability to regulate a rapidly expanding sector. Between trust, innovation, and surveillance, the battle for personal data protection promises to be decisive.

Conclusion

These studies reveal a concerning reality: AI chatbots exploit our natural tendency to trust in order to extract sensitive personal data. As Nicolas Dabène, a security expert with more than 15 years in the field, points out, this situation perfectly illustrates why data protection must be built into AI systems from the design stage.

In light of these revelations, user vigilance and stricter regulation are urgently needed to preserve our digital privacy in the age of artificial intelligence.


Article published on September 4, 2025 by Nicolas Dabène - PHP & PrestaShop Expert with 15+ years of experience in computer security

Frequently Asked Questions

How can you recognize a manipulative chatbot?

Beware of chatbots that seem too empathetic, share personal anecdotes, or excessively reassure you about confidentiality. These reciprocity techniques are designed to build trust.

What data is most at risk with chatbots?

Medical and financial information, login credentials, and personal details shared in an emotional context are particularly vulnerable to exploitation.

How can I protect myself when using AI assistants?

Limit the sensitive information you share, review the permissions granted to browser extensions, and prefer tools with transparent, GDPR-compliant privacy policies. The sketch below illustrates one way to apply the first point.
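As a practical illustration of limiting what you share, here is a minimal sketch, not from the article, of a client-side filter that redacts obvious identifiers before a prompt leaves the machine. The regex patterns are illustrative and deliberately simple; real PII detection requires far more than this.

```python
import re

# Illustrative patterns only; real PII detection needs far more than
# a few regexes. Order matters: card numbers are matched before the
# looser phone pattern so they are not mislabeled.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b"), "[CARD]"),
    (re.compile(r"\+?\d[\d\s.-]{7,}\d"), "[PHONE]"),
]

def redact(prompt: str) -> str:
    """Replace obvious identifiers before the prompt is sent anywhere."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Reach me at jane.doe@example.com or +33 6 12 34 56 78."))
# -> Reach me at [EMAIL] or [PHONE].
```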