Recent incidents have highlighted a troubling trend: popular AI chatbots are inadvertently sharing people's personal phone numbers, causing distress to those affected. In March, an Israeli software developer received messages on WhatsApp from Gemini, Google's generative AI, which had mistakenly included his contact details in customer service instructions.
Similarly, a PhD candidate at the University of Washington discovered a colleague's personal cell number while experimenting with Gemini. These cases add to a growing list of privacy problems linked to large language models (LLMs).
The surge in AI-related privacy requests is alarming. DeleteMe, a data-removal company, reported a 400% increase in such inquiries over the past seven months, with ChatGPT the tool users cited most often.
Experts attribute these leaks to personally identifiable information (PII) that ends up in model training data, though the exact mechanisms remain unclear. The issue underscores the need for better oversight and stricter guardrails in AI development.
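One guardrail commonly discussed in this context is scrubbing obvious PII, such as phone numbers, from text before it enters a training corpus. The sketch below is purely illustrative: the regex and function names are hypothetical simplifications, not any vendor's actual pipeline, and real systems rely on far more robust PII-detection tooling.

```python
import re

# Hypothetical, simplified matcher for phone-number-like strings:
# an optional "+", a digit, then at least 7 digits/spaces/().- characters,
# ending in a digit. Real PII filters are considerably more sophisticated.
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_phone_numbers(text: str) -> str:
    """Replace anything that looks like a phone number with a placeholder."""
    return PHONE_RE.sub("[PHONE REDACTED]", text)

sample = "Call our support line at +1 (555) 010-4477 for help."
print(redact_phone_numbers(sample))
# → Call our support line at [PHONE REDACTED] for help.
```

A filter like this would run over raw text during dataset preparation, so that a model never memorizes the number in the first place; it does nothing, of course, for models already trained on unscrubbed data.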