Google warns employees not to enter sensitive data into AI chatbots


Google’s parent company, Alphabet, is warning employees not to share personal or business information with AI chatbots. This includes the company’s own chatbot, Bard.

The report could hardly have come at a worse time: while the EU is negotiating with Google over the conditions under which Bard may launch in member states, with privacy high on the agenda, news has surfaced of parent company Alphabet's warnings to its employees about AI chatbots.

People familiar with the matter revealed this to Reuters, the news agency reports. According to the report, employees are forbidden from entering confidential material into chatbots, which the company later confirmed. Relatedly, Google has added a note to its privacy policy asking users not to feed Bard sensitive or confidential information.

Google aims to be “transparent about the limitations of its technology”

Alphabet is also telling employees not to use code generated by Bard in production (Bard gained the ability to write code in its latest update), according to Reuters' sources. In a statement, Alphabet said that while Bard can make unwanted code suggestions, it can still help programmers. According to Reuters, Google "wanted to be transparent about the limitations of its technology."


Like OpenAI with ChatGPT, Google uses the data users enter into Bard to further train its AI models. Only after political pressure did OpenAI introduce a way to opt out, but at the cost of convenience, as past chats are immediately deleted.

Google isn't the only company to issue such warnings about chatbots; Samsung recently banned the use of ChatGPT and Bard after discovering that employees had been entering sensitive lines of code into the chatbots.

Those who don't comply with the ban could be fired, according to an internal memo. Companies worry that data entered into chatbots could leak, or that third parties such as OpenAI and its partners could gain insight into it, for example when preparing the data for AI training.

The fact that even Alphabet is now warning its employees about chatbots, including its own, shows that even the creators of the new tools are unsure how trustworthy these systems are.
