With the rise of hybrid work, data leakage has become a significant issue. Employees are now working from a variety of locations, including their homes, coffee shops, and even public libraries. This makes it more difficult to keep track of data moving between managed endpoints and your organization's SaaS applications or private apps.

Shadow IT, the use of unauthorized SaaS and cloud services by employees, has long been a challenge for IT departments. Left unchecked, shadow IT poses a significant security risk, as it can expose your organization's data to unauthorized access.

Generative AI platforms like ChatGPT are a form of SaaS or cloud app and have emerged as a new frontier of shadow IT. These platforms allow users to generate text and images, troubleshoot software bugs, and create content that is often indistinguishable from human-created work. That versatility makes them popular productivity tools, and a new channel through which employees can inadvertently expose sensitive information.

The problem is that many platforms use the data submitted by end users to train their models, meaning proprietary information may resurface in outputs served to other users once it has been ingested. Samsung, for example, banned ChatGPT after three separate incidents in which employees unintentionally shared sensitive data with the generative AI platform, including confidential source code.

The good news is that generative AI platforms are not all that different from other internet destinations you need to protect your data from. With the right tools, you can block access outright, or block specific user actions such as uploading or submitting sensitive data, and prevent data leakage.

To reduce the risk of data exfiltration and curb shadow IT, you need a modern secure web gateway (SWG) solution with native data loss prevention (DLP) functionality. A SWG monitors all internet traffic and blocks access to unauthorized websites and applications. A modern SWG with DLP goes further: it can scan posts and files for sensitive data and prevent that data from being uploaded to public websites and unauthorized cloud apps.

In a hybrid work environment, you must take steps to protect your data while ensuring productivity continues. A SWG with DLP functionality can help you do both.

How data is leaked to ChatGPT and other generative AI platforms

There are several ways that data can be leaked to ChatGPT and other generative AI platforms. Some of the most common scenarios include:

Pasting sensitive data into AI apps for formatting or grammar checks

This is a common practice, but it can be risky if the sensitive data is not properly anonymized.

Developers pasting source code into AI apps to improve performance and efficiency

Source code is proprietary to the enterprise and can reveal sensitive details about a company's products or services.

Using AI apps to transcribe sensitive company meetings

This can be a convenient way to create a meeting transcript, but it is risky if the discussion contains sensitive information.

Accidentally uploading sensitive or regulated data

Individuals may unknowingly input confidential data into the chat interface under the false impression that it is a secure means of communication. This can include personally identifiable information (PII), software code, financial information, protected health information (PHI), or any other confidential or regulated data.

How Lookout prevents data leaks to ChatGPT and other generative AI

Lookout Secure Internet Access is a data-centric SWG built on the principles of zero trust that protects users, underlying networks, and corporate data from internet threats like malware, zero-day exploits, and browser-based attacks. With native DLP capabilities, Secure Internet Access inspects outbound traffic for sensitive data and applies data loss prevention policies to stop leakage to the public internet. This lets your organization restrict generative AI platforms and other unauthorized websites and apps with granular control.

Different ways to protect your data

Lookout uses a variety of techniques to prevent data leaks, including:

  • URL filtering: Lookout can block access to ChatGPT and other generative AI tools altogether. This helps prevent employees from accidentally or intentionally uploading sensitive data to these tools.
  • Content filtering: Lookout can scan the content of posts and file uploads for sensitive data. If sensitive data is detected, Lookout can take action to prevent the data from being uploaded, such as masking the data or requiring the user to provide additional authentication.
  • Role-based access control: Lookout allows you to create custom policies that define how users can interact with ChatGPT and other generative AI tools. For example, you can create a policy that allows only a selected group of users to access the platform but prevents them from submitting sensitive data.
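The content-filtering step described above can be sketched in a generic way. The following is a minimal illustration of pattern-based detection and masking of outbound text, under the assumption that sensitive data can be recognized by regular expressions; it is not Lookout's actual implementation, which uses far richer detectors.

```python
import re

# Illustrative patterns only; a production DLP engine combines many
# detectors (exact data matching, fingerprinting, ML classifiers).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(text: str) -> tuple[bool, str]:
    """Return (blocked, masked_text) for an outbound post or file upload."""
    blocked = False
    masked = text
    for name, pattern in PATTERNS.items():
        if pattern.search(masked):
            blocked = True  # sensitive data found: flag the request
            masked = pattern.sub(f"[REDACTED-{name.upper()}]", masked)
    return blocked, masked

blocked, masked = scan_outbound("My SSN is 123-45-6789, please format this.")
```

In this sketch the gateway could either block the request entirely or forward the masked version, mirroring the "mask the data" option described above.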

Customize policies that best suit your needs

We ensure that you can quickly write policies that protect data while leaving room to customize and refine for more specific circumstances. Here are some of the ways you can write policies with Lookout Secure Internet Access, whether the data is entered directly into posts or uploaded as files:

  • Use a predefined AI category covering a wide range of AI-driven websites, so enterprise admins can write access policies quickly.
  • Define custom categories, which enable your organization to categorize websites of interest and write policies around them.
  • Encrypt, mask, or redact data using digital rights management (DRM).
  • Coach users before sensitive data is uploaded, preventing risky actions before they happen.
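To make the category-based approach concrete, here is a hypothetical policy structure and resolver. The schema, names, and fields are invented for illustration; Lookout's actual policies are configured in its admin console, not written as code.

```python
# Hypothetical policy records: a default block on the predefined
# generative-AI category, plus a group-scoped allow with DLP actions.
policies = [
    {
        "name": "block-genai-default",
        "category": "generative-ai",
        "users": "*",            # wildcard: applies to everyone
        "action": "block",
    },
    {
        "name": "allow-genai-engineering",
        "category": "generative-ai",
        "users": ["engineering"],
        "action": "allow",
        "dlp": {"on_sensitive_data": "mask", "coach_user": True},
    },
]

def resolve(policies, category, user_groups):
    """Return the matching policy; a group match overrides a wildcard."""
    match = None
    for p in policies:
        if p["category"] != category:
            continue
        if p["users"] == "*":
            match = match or p          # wildcard is the fallback
        elif set(p["users"]) & set(user_groups):
            match = p                   # specific group wins
    return match
```

This mirrors the role-based example earlier in the post: engineering users reach the platform but still pass through DLP inspection, while everyone else is blocked.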

How Lookout applies data loss prevention (DLP) to all data 

The Lookout Cloud Security Platform offers centralized DLP policy management and enforcement across every platform and app. Using advanced DLP, Lookout can identify, assess, and protect sensitive data in any format and app using exact data matching and fingerprinting. Once sensitive data has been identified, Lookout provides adaptive data protection policies that go beyond the basic allow-or-deny capabilities offered by other DLP solutions.
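Document fingerprinting, one of the techniques mentioned above, can be illustrated generically: hash overlapping fragments of a protected document so that exact excerpts can be recognized in outbound traffic without storing the document in the inspection path. This is a simplified sketch, not Lookout's proprietary method.

```python
import hashlib

def fingerprint(text: str, chunk_words: int = 5) -> set[str]:
    """Hash overlapping word n-grams of a document into a fingerprint set."""
    words = text.lower().split()
    return {
        hashlib.sha256(" ".join(words[i:i + chunk_words]).encode()).hexdigest()
        for i in range(max(1, len(words) - chunk_words + 1))
    }

def contains_protected(outbound: str, protected_prints: set[str],
                       threshold: int = 1) -> bool:
    """Flag outbound text that shares >= threshold chunks with a protected doc."""
    return len(fingerprint(outbound) & protected_prints) >= threshold

# Register a protected snippet once; later, inspect outbound posts against it.
secret = "the quarterly revenue forecast projects eighteen percent growth in cloud"
prints = fingerprint(secret)
```

The design choice here is that only hashes are compared, so even a pasted excerpt inside an otherwise innocuous prompt is caught, while unrelated text passes through.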

Lookout’s DLP extends data classification and governance to any document in the cloud, integrating with Microsoft Azure Information Protection (AIP), Titus classifications, and cloud-native labels. It can also apply standard compliance policies that cover regulations like GDPR, SOX, PCI, HIPAA, and many others.

With its comprehensive data protection features, you can trust Lookout to protect your sensitive corporate data, even in the face of new challenges like generative AI.

Mitigating the Risks of GenAI: Secure Your Data & Empower Your Workforce

Watch our recent on-demand webinar, where we’ll discuss best practices on your journey to enable your workforce to use GenAI tools safely, and without risk to your organization.

Book a personalized, no-pressure demo today to learn:

  • How adversaries are leveraging avenues outside traditional email to conduct phishing on iOS and Android devices
  • Real-world examples of phishing and app threats that have compromised organizations
  • How an integrated endpoint-to-cloud security platform can detect threats and protect your organization
