Beware What You Say to AI Chatbots

Generative AI chatbots like ChatGPT, Microsoft's Bing/Copilot, and Google's Gemini are the vanguard of a significant advance in computing. But conversations with these chatbots are starting to raise serious privacy and confidentiality concerns.

Increasingly, people are asking AI chatbots to analyze or summarize information, pasting in the contents of entire files. Services like ChatPDF and features in Adobe Acrobat even let you ask questions about PDFs you provide, admittedly a handy way to extract key content from a lengthy document.

However useful from a productivity standpoint, these practices create troubling opportunities to expose sensitive personal data or confidential corporate information. In one widely reported 2023 incident, for instance, Samsung engineers inadvertently leaked confidential source code while using ChatGPT to fix errors in it.

The most significant concern with AI chatbots is that sensitive personal and business information might be used to train future versions of the underlying large language models. That information could then be regurgitated to other users in unpredictable contexts. The concern is well founded: early large language models were trained on publicly accessible online text without the knowledge or permission of its authors.

While the privacy policies of the best-known AI chatbots state that uploaded data will not be used to train future models, there is unfortunately no guarantee that companies will adhere to those policies. And even a company that intends to honor its promises can make mistakes: conversation history could accidentally end up in a training data set.

Because chatbot services store conversation history, anything you add to a conversation sits in an environment you don't control, where at minimum the service's employees could see it and it could be shared with partners. And if attackers were to compromise the service, your information could be stolen or exposed.

Many companies also operate under master services agreements that specify how client data must be handled. Employees at those companies should keep client and product details out of AI-based platforms entirely; an accidental disclosure could violate those contracts and expose the company to legal and financial penalties.

So, our advice is to avoid sharing sensitive information with chatbots. It may feel like you are having a private conversation, but a good rule of thumb is to share nothing you wouldn't tell a stranger. And if you need help navigating chatbots or other AI-powered technology, please don't hesitate to send us a message today.

(Featured image by iStock.com/Ilya Lukichev)