
AI Assistants: Empowering Allies Or Risky Confidants?


Vinay Goel is CEO of Wald.ai. He was the Chief Digital Officer at JLL and led products at companies like Google, CheckPoint and Webroot.


By Vinay Goel, Forbes Councils Member.

Feb 28, 2025, 07:15am EST


Generative AI tools like ChatGPT are revolutionizing how we work, offering assistance with tasks ranging from writing emails to generating code. However, enthusiasm for these powerful tools should be tempered with a fair amount of caution.


The primary concern revolves around the potential for employees to inadvertently leak sensitive company information when interacting with these AI models.


Unintentional Data Leaks

When employees use an AI chatbot to assist with tasks like writing reports, summarizing documents, generating code or brainstorming ideas, they may inadvertently include sensitive information in their prompts. This could include:


• Trade Secrets: Proprietary formulas, marketing strategies, upcoming product launches or research and development plans.


• Customer Data: Personally identifiable information (PII) such as names, addresses, phone numbers, financial details or purchase history.


• Internal Documents: Confidential reports, legal documents, internal memos and sensitive communication records.


• Intellectual Property: Code, designs and other creative works protected by copyright or patents.


LLMs like ChatGPT operate as “black boxes,” meaning the internal workings of how they process and utilize information are often opaque. This lack of transparency raises concerns about how user inputs, including potentially sensitive data, are used to train and improve the model itself. The practical consequence is that both user identities and the data users submit can be exposed. The 2023 Samsung incident, in which engineers reportedly pasted proprietary source code into ChatGPT, is a prime example.


Mitigating The Risks

To address these concerns, businesses should implement robust data security measures:


• Private AI Instances: Deploying private instances of LLMs within a secure company network provides greater control over data. This minimizes the risk of data exposure to third-party providers but requires significant investment in infrastructure and ongoing maintenance.


• Contextual Data Redaction: Some apps specialize in identifying and redacting sensitive information within user prompts before they are submitted to LLMs, ensuring that crucial data remains protected while still enabling employees to leverage the power of AI. These solutions use advanced techniques such as natural language processing (NLP) and AI model fine-tuning to analyze prompts and flag risks, from personally identifiable information to a broad spectrum of sensitive company data, without the need to pre-define sensitive data types or keywords. (A simplified sketch of this redact-before-submit flow appears after this list.)


• Traditional DLP Solutions: Existing data loss prevention (DLP) solutions can be adapted to monitor employee interactions with LLMs, flagging suspicious activity and preventing the leakage of pre-defined sensitive data types. (The second sketch after this list illustrates the pattern-matching core of this approach.)


• End-To-End Encryption: Implementing end-to-end encryption for all user interactions with LLMs provides an additional layer of security. This ensures that user prompts and LLM responses are encrypted in transit and at rest, minimizing the risk of data interception and unauthorized access. (The third sketch after this list shows the encryption-at-rest half of this practice.)


• Employee Training And Awareness: Educating employees about the risks associated with using LLMs with sensitive data is crucial. Training programs should emphasize best practices for interacting with AI tools, including:

  • Minimizing the inclusion of sensitive data in prompts.

  • Reviewing and redacting any sensitive information before submitting prompts.

  • Being aware of the potential for unintentional data leakage.


• Regular Security Audits: Conducting regular security audits and penetration testing can help identify and address potential vulnerabilities in your organization’s AI security posture.
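
To make the redaction approach concrete, here is a minimal sketch in Python, assuming the open-source spaCy library and its small English model; the entity labels, placeholder format and the redact_prompt function name are illustrative choices, and commercial tools rely on far more capable, fine-tuned models:

    import spacy

    # Assumes: pip install spacy && python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    def redact_prompt(text: str) -> str:
        """Mask named entities in a prompt before it leaves the company."""
        doc = nlp(text)
        # Walk entities in reverse so character offsets stay valid while editing.
        for ent in reversed(doc.ents):
            if ent.label_ in {"PERSON", "ORG", "GPE", "MONEY", "DATE"}:
                text = text[:ent.start_char] + f"[{ent.label_}]" + text[ent.end_char:]
        return text

    prompt = "Summarize the contract between Acme Corp and Jane Doe in Berlin."
    print(redact_prompt(prompt))
    # Likely output: "Summarize the contract between [ORG] and [PERSON] in [GPE]."

The redacted prompt can then be sent to any external LLM; the original text never leaves the corporate boundary.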
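
Traditional DLP, by contrast, works from pre-defined patterns. This second sketch, using only Python's standard library, shows the pattern-matching core; the three patterns and the check_prompt helper are illustrative assumptions, not a real policy set:

    import re

    # Illustrative patterns only; production DLP policies are far more extensive.
    DLP_PATTERNS = {
        "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "Credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def check_prompt(text: str) -> list[str]:
        """Return the names of any sensitive patterns found in a prompt."""
        return [name for name, pattern in DLP_PATTERNS.items() if pattern.search(text)]

    violations = check_prompt("Customer 123-45-6789 complained about an invoice.")
    if violations:
        print(f"Blocked: prompt matched {violations}")  # flag rather than submit

The trade-off relative to contextual redaction is visible in the code: anything that does not match a pre-defined pattern passes through silently.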
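
Finally, one caveat on encryption: a third-party LLM must see plaintext to generate a response, so fully end-to-end encryption is realistic only with a self-hosted model, and transport security (TLS) handles the in-transit half. This third sketch, assuming Python's cryptography package, illustrates the at-rest half by encrypting prompts and responses before they are written to a chat log:

    from cryptography.fernet import Fernet

    # In production the key would live in a key-management service, not in code.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    def store_interaction(prompt: str, response: str) -> tuple[bytes, bytes]:
        """Encrypt a prompt/response pair before writing it to a chat log."""
        return cipher.encrypt(prompt.encode()), cipher.encrypt(response.encode())

    enc_prompt, enc_response = store_interaction("Draft the merger memo.", "Here is a draft...")
    print(cipher.decrypt(enc_prompt).decode())  # readable only with the key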


The Future Of Work And AI

Generative AI tools like ChatGPT offer tremendous potential to revolutionize the workplace, but they also present significant challenges to data security. By implementing a multilayered approach that combines technological solutions, employee training and robust security policies, organizations can harness the power of AI while mitigating the risks and ensuring the protection of their most valuable assets.



