Technology · March 30, 2026 · Kadim Karakuş

Protecting Sensitive Data in Copilot with Microsoft Purview DLP: Web Search Restrictions and Real-Time Prompt Evaluation

While Microsoft 365 Copilot boosts productivity, sensitive data in user prompts carries a risk of leaking through external web searches. Microsoft Purview DLP's new real-time prompt evaluation feature detects sensitive information types (SITs) in prompts and prevents Copilot from using that data in external web searches. This guide covers the two protection models, SIT-based prompt restriction and sensitivity label-based file protection, along with DSPM for AI monitoring capabilities and a step-by-step policy creation process.


The Risk of Data Leakage in the AI Era

Microsoft 365 Copilot is a powerful AI assistant that allows users to ask questions in natural language and receive responses based on enterprise data. When generating responses, Copilot uses not only internal Microsoft Graph data but also external web searches when needed to deliver more comprehensive results. In this web search mechanism, user prompts are not sent directly to external services; Copilot generates derived search queries with user and tenant identifiers stripped.

However, despite this protection layer, prompts that contain sensitive information still feed the query generation process, and that creates a potential data leakage risk. When a user enters a prompt containing a credit card number, passport details, social security number, or bank account information, processing that sensitive data during query generation may violate enterprise security policies.

This risk creates a critical concern particularly in heavily regulated sectors such as financial services, healthcare, and government. Regulations including GDPR, PCI DSS, and HIPAA prohibit the processing of sensitive data through unauthorized channels. Therefore, managing AI tools within the scope of data protection policies has become a necessity.

Microsoft Purview DLP addresses this exact need with a new feature: real-time evaluation of Copilot prompts for sensitive information types (SITs) and restriction of external web searches when sensitive data is detected. For detailed information about the general DLP architecture and fundamental concepts, see our Microsoft Purview DLP Guide.

New Feature: Real-Time DLP Evaluation for Copilot Prompts

Microsoft Purview DLP now performs real-time prompt scanning in Microsoft 365 Copilot and Copilot Chat interactions. This feature instantly evaluates the text of prompts submitted by users for Sensitive Information Types (SITs).

How It Works

When a DLP policy is configured for the Microsoft 365 Copilot and Copilot Chat location with the "Content contains sensitive information types" condition, the system follows this flow:

  • The user submits a prompt to Copilot.
  • The DLP engine scans the prompt text in real time for defined SITs (credit card numbers, passport information, social security numbers, etc.).
  • When sensitive information is detected, Copilot does not process the prompt and displays a notification to the user indicating that the organization has blocked the use of such sensitive information.
  • Copilot does not use the detected sensitive data for either internal or external web searches.
  • When no sensitive information is detected, Copilot responds normally and uses internal Microsoft Graph sources and external web searches if licensing permits.
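The flow above can be sketched in a few lines of Python. This is a conceptual model only: the two regex patterns are toy stand-ins for real SIT definitions (the actual Purview engine uses far richer patterns, keyword evidence, and checksums), and `evaluate_prompt` is a hypothetical name, not a Purview API.

```python
import re

# Hypothetical, simplified stand-ins for two built-in SITs.
SIT_PATTERNS = {
    "Credit Card Number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "U.S. Social Security Number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def evaluate_prompt(prompt: str) -> dict:
    """Mimic the DLP decision: block the prompt if any SIT matches."""
    detected = [name for name, pattern in SIT_PATTERNS.items()
                if pattern.search(prompt)]
    if detected:
        # Sensitive data found: Copilot does not respond, and the prompt
        # is excluded from both internal and external web searches.
        return {"action": "block", "detected_sits": detected,
                "web_search_allowed": False}
    # Nothing detected: Copilot responds normally.
    return {"action": "allow", "detected_sits": [],
            "web_search_allowed": True}

print(evaluate_prompt("Summarize spending on card 4111 1111 1111 1111"))
```

The key property the sketch illustrates is that blocking happens before any query generation: a blocked prompt never reaches the search layer at all.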

Supported Scope

This protection covers the following Microsoft 365 applications:

  • Microsoft 365 Copilot: The Copilot experience across all Microsoft 365 applications
  • Copilot Chat: The standalone Copilot chat interface
  • Copilot in Word, Excel, PowerPoint: Copilot skills within these applications
  • Prebuilt agents: Predefined agents running within Microsoft 365 Copilot and Copilot Chat

SIT Support

Both built-in SITs provided by Microsoft and custom SITs created by organizations for their specific needs are supported. However, SITs created through document fingerprinting cannot be used with Copilot DLP policies.
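Custom SITs in Purview are typically defined by a primary regex pattern plus supporting keyword evidence within a character proximity window. The sketch below illustrates that detection model with an entirely fictional "Contoso employee ID" pattern; the pattern, keywords, and function name are illustrative assumptions, not a real SIT definition.

```python
import re

# Fictional custom SIT: primary pattern plus supporting keywords
# that must appear within a proximity window around each match.
PATTERN = re.compile(r"\bEMP-\d{6}\b")
KEYWORDS = ("employee", "personnel", "badge")
PROXIMITY = 300  # characters of surrounding context to search

def matches_custom_sit(text: str) -> bool:
    """Return True if the pattern matches with corroborating keywords nearby."""
    for match in PATTERN.finditer(text):
        window = text[max(0, match.start() - PROXIMITY):
                      match.end() + PROXIMITY].lower()
        if any(keyword in window for keyword in KEYWORDS):
            return True
    return False

print(matches_custom_sit("Update the employee record for EMP-104233."))  # True
print(matches_custom_sit("Order code EMP-104233 shipped."))              # False
```

Requiring corroborating keywords is what keeps a custom SIT's false positive rate manageable: a bare pattern match without supporting evidence is not treated as sensitive.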

Two Protection Models: SIT-Based and Sensitivity Label-Based

Microsoft Purview DLP offers two complementary models for protecting Copilot interactions. Both models can be configured as separate rules within the same DLP policy but cannot be used together within the same rule.

Model 1: SIT-Based Prompt Restriction

This model scans the text that users type directly into Copilot prompts. When sensitive information types are detected, Copilot is completely prevented from responding and performing web searches.

  • Status: Preview
  • Condition: Content contains > Sensitive information types
  • Action: Restrict Copilot from processing content > Processing prompts
  • Effect: Copilot does not respond to the prompt; sensitive data is not used in internal or external searches
  • Scope: Prompt text (directly typed text)

Model 2: Sensitivity Label-Based File and Email Restriction

This model prevents files and emails with assigned sensitivity labels from being used in Copilot's response summarization. Files may continue to appear in citations, but their content is not processed.

  • Status: Generally Available (GA)
  • Condition: Content contains > Sensitivity labels
  • Action: Prevent Copilot from processing content
  • Effect: Labeled file/email content is not used in Copilot responses; may appear in citations
  • Scope: Files in SharePoint Online and OneDrive for Business; emails sent after January 1, 2025

Comparison

Feature | SIT-Based (Preview) | Sensitivity Label-Based (GA)
Protection target | Prompt text | File and email content
Trigger condition | Sensitive info types in prompt | Sensitivity labels on files/emails
Web search impact | External and internal search blocked | File content excluded from summarization
Copilot response | Completely blocked | Response given excluding labeled source
Scope | All Copilot experiences | SharePoint, OneDrive, Exchange

Evaluating both protection models together before Copilot deployment ensures the most comprehensive approach to data security. For detailed information about pre-Copilot security preparations, see our Pre-Copilot Data Security Checklist.

Monitoring and Visibility: DSPM for AI and Activity Explorer

Effective management of DLP policies extends beyond rule definition. Monitoring, analyzing, and reporting on triggered events is equally important. Microsoft Purview provides multiple monitoring layers for Copilot DLP events.

DLP Alerts

When a SIT-based or sensitivity label-based DLP rule is triggered, an alert is automatically created in the DLP Alerts panel within the Purview compliance portal. These alerts show in detail which user triggered a rule violation with which SIT or label, the trigger time, and the affected Copilot experience. Administrators can review these alerts by priority and take necessary actions.

Activity Explorer

Activity Explorer is a centralized monitoring tool that enables historical tracking of Copilot-related actions within the DLP and DSPM for AI scope. Fed by the Microsoft 365 unified audit log, Activity Explorer retains data for up to 30 days and supports detailed analysis with roughly 50 different filters.

Actions that can be monitored in Activity Explorer related to Copilot include:

  • Prompt attempts containing sensitive information types
  • Attempts to process sensitivity-labeled files by Copilot
  • Copilot interactions blocked by DLP rules
  • Per-user DLP trigger frequency and patterns
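Once these events are exported, per-user trigger analysis is straightforward. The record shape below is a hypothetical simplification of an exported audit event (real exports carry many more fields, and field names may differ); the point is the filtering pattern, not the schema.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical simplified export of audit events; real records have
# many more fields and may use different field names.
events = [
    {"user": "alice@contoso.com", "activity": "DLPRuleMatch",
     "location": "Microsoft365CopilotAndCopilotChat",
     "timestamp": datetime(2026, 3, 20, 9, 15)},
    {"user": "bob@contoso.com", "activity": "DLPRuleMatch",
     "location": "Microsoft365CopilotAndCopilotChat",
     "timestamp": datetime(2026, 3, 21, 14, 2)},
    {"user": "alice@contoso.com", "activity": "DLPRuleMatch",
     "location": "Microsoft365CopilotAndCopilotChat",
     "timestamp": datetime(2026, 3, 25, 11, 40)},
]

def copilot_dlp_triggers(events, as_of, days=30):
    """Count Copilot DLP rule matches per user inside the retention window."""
    cutoff = as_of - timedelta(days=days)
    return Counter(
        e["user"] for e in events
        if e["activity"] == "DLPRuleMatch"
        and e["location"] == "Microsoft365CopilotAndCopilotChat"
        and e["timestamp"] >= cutoff
    )

print(copilot_dlp_triggers(events, as_of=datetime(2026, 3, 30)))
```

A recurring high count for one user is usually a training signal rather than a policy failure: the block worked, but the user may not understand why.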

DSPM for AI (Data Security Posture Management for AI)

DSPM for AI is a comprehensive solution that assesses and provides improvement recommendations for an organization's overall security posture in AI interactions. Covering interactions with Microsoft 365 Copilot, enterprise AI applications, and third-party AI tools, DSPM for AI offers the following capabilities:

  • Discovery: Detection of risky interactions and sensitive information usage
  • Protection: Automatic protection through DLP policies and sensitivity labels
  • Governance: Audit capabilities through data lifecycle management, communication compliance, and eDiscovery

DSPM for AI provides guided workflows that transition from reactive visibility to proactive, outcome-driven insights. This approach enables security teams to detect and resolve AI-related risks more quickly.

Policy Creation: Step-by-Step Implementation

Follow the steps below to create a new DLP policy that provides sensitive information protection in Copilot prompts.

Step 1: Create DLP Policy

Navigate to Data loss prevention > Policies in the Microsoft Purview compliance portal and create a new policy. Copilot DLP policies can only be created using the Custom policy template.

Step 2: Select Location

Select Microsoft 365 Copilot and Copilot Chat as the policy location. When this location is selected, all other locations (Exchange, SharePoint, OneDrive, etc.) are automatically disabled. If you want to combine Copilot protections with other locations, you need to create separate policies.

Step 3: Configure Rules

When creating the DLP rule, select one of two conditions:

  • Sensitive information types (SIT) condition: Select the SITs to be detected in prompt text. You can define built-in SITs such as credit card numbers, social security numbers, IBANs, and passport numbers, or custom corporate SITs.
  • Sensitivity labels condition: Select the sensitivity labels for content you do not want Copilot to process.

Both conditions cannot be used together within the same rule; however, separate rules for each condition can be created within the same policy.
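The one-condition-per-rule constraint can be expressed as a simple validation over a policy model. Everything below is a hypothetical representation for illustration; it is not how Purview stores policies, and the field names are invented.

```python
# Hypothetical policy model: one policy, two separate rules,
# each carrying exactly one condition type.
def validate_rule(rule: dict) -> None:
    """Reject rules that mix SIT and sensitivity-label conditions."""
    has_sits = bool(rule.get("sensitive_info_types"))
    has_labels = bool(rule.get("sensitivity_labels"))
    if has_sits and has_labels:
        raise ValueError(
            "A Copilot DLP rule cannot combine SIT and sensitivity-label "
            "conditions; create two rules in the same policy instead."
        )
    if not (has_sits or has_labels):
        raise ValueError("Rule needs a SIT or sensitivity-label condition.")

policy = {
    "name": "Copilot data protection",
    "location": "Microsoft 365 Copilot and Copilot Chat",
    "rules": [
        {"name": "Block sensitive prompts",
         "sensitive_info_types": ["Credit Card Number"],
         "action": "Restrict Copilot from processing content"},
        {"name": "Exclude labeled files",
         "sensitivity_labels": ["Highly Confidential"],
         "action": "Prevent Copilot from processing content"},
    ],
}

for rule in policy["rules"]:
    validate_rule(rule)  # both pass: each rule uses exactly one condition
```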

Step 4: Define Actions

Select Restrict Copilot from processing content as the action. For SIT-based rules, the Processing prompts sub-option is also available.

Step 5: Test and Deploy

First run the policy in simulation mode to test without affecting the actual user experience. Review simulation results in DLP reports, evaluate false positive rates, and make necessary adjustments. After deploying the policy, updates may take up to 4 hours to reflect in the Copilot experience.

Required Roles and Permissions

You need one of the following roles to create or edit a DLP policy:

  • Entra AI Admin: Managing all aspects of Microsoft 365 Copilot and AI services
  • Purview Data Security AI Admin: Editing DLP policies related to Copilot and viewing DSPM for AI content
  • Purview Compliance Administrator: General compliance management
  • Purview Information Protection Admin: Managing information protection policies

Microsoft recommends using task-specific roles rather than Global Administrator in accordance with the least privilege principle.

Practical Scenarios

Scenario 1: Protecting Financial Data

A financial institution encourages employees to use Microsoft 365 Copilot in daily business processes. However, they want to prevent credit card numbers, IBANs, and EU debit card information from being used in Copilot prompts.

In this scenario, the organization creates a DLP policy for the Microsoft 365 Copilot and Copilot Chat location and selects credit card number, IBAN, and EU debit card number SITs in the "Content contains sensitive information types" condition. When a user enters a prompt containing this information, Copilot does not respond and does not use the sensitive data in web searches.
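Built-in financial SITs such as the credit card number type combine pattern matching with checksum validation to keep false positives down. The sketch below shows the standard Luhn checksum that separates plausible card numbers from random digit runs; it illustrates the idea behind the checksum step, not Purview's internal implementation.

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: distinguishes plausible card numbers from random digits."""
    digits = [int(c) for c in number if c.isdigit()]
    if len(digits) < 13:
        return False
    checksum = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # True: a well-known test number
print(luhn_valid("1234 5678 9012 3456"))  # False: fails the checksum
```

This is why a 16-digit order number rarely triggers the credit card SIT: the pattern may match, but the checksum usually fails.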

Scenario 2: GDPR/HIPAA-Compliant Healthcare Data Protection

A healthcare organization wants to prevent patient data from leaking to external sources through Copilot. Custom SITs covering social security numbers, health record numbers, and personal health information are defined and added to the DLP policy.

Additionally, a second rule is created in the same policy to prevent files and emails with the "Confidential – Patient Data" sensitivity label from being used in Copilot summarization. This dual-layer approach provides protection at both the prompt level and the file level.

Scenario 3: Enforcing Corporate Confidentiality Labels

A technology company has defined "Highly Confidential" and "Personal" labels in their sensitivity label taxonomy. A sensitivity label-based DLP rule is created to prevent files and emails with these labels from being summarized in Copilot responses. Copilot can display these files in citations but cannot use their content to generate responses.

Limitations and Considerations

The following limitations should be considered when configuring Copilot DLP protections:

  • Uploaded files are not scanned: The contents of files that users upload directly into Copilot prompts cannot be scanned by DLP. DLP only evaluates text typed into the prompt.
  • No admin unit support: The Microsoft 365 Copilot and Copilot Chat policy location does not currently support administrative units.
  • Location separation: When the Copilot location is selected, all other DLP locations are disabled. Separate policies are required for protection covering Copilot and other locations.
  • Policy update time: Changes to DLP policies may take up to 4 hours to reflect in the Copilot experience.
  • Preview period user messages: During the Preview period for SIT-based prompt restriction, user messages displayed in Word, Excel, and PowerPoint may not clearly indicate that the restriction stems from an organizational policy. However, the sensitive prompt is restricted in all cases.
  • Calendar invites: Calendar invites are not supported under sensitivity label-based protection.

For detailed information about common challenges and solutions in Copilot deployments, see our Why Copilot Deployments Fail article.

Frequently Asked Questions

Which applications does Copilot DLP protection cover?

DLP protection covers the Copilot experience in Microsoft 365 Copilot, Copilot Chat, Word, Excel, and PowerPoint. Prebuilt agents running within Microsoft 365 Copilot and Copilot Chat are also included in this protection. Organizations must create a new DLP policy or update their existing policy to enable this protection.

What is the difference between SIT-based and sensitivity label-based protection?

SIT-based protection detects sensitive information types (credit card numbers, social security numbers, etc.) in text that users type directly into prompts and completely blocks Copilot from responding. Sensitivity label-based protection prevents files and emails with specific labels from being used in Copilot response summarization. Both conditions can be configured as separate rules in the same policy but cannot be combined within the same rule.

Does Copilot completely stop when DLP is triggered?

With SIT-based restriction, yes, Copilot does not respond to any prompt containing sensitive information and does not use that data in internal or external searches. With sensitivity label-based restriction, Copilot only excludes the content of the labeled source from the response and may continue to respond based on other sources. In both cases, an alert is created in DLP Alerts.

Are files uploaded to prompts also scanned by DLP?

No. Currently, DLP cannot scan the contents of files that users upload directly into Copilot prompts. DLP only evaluates text content typed into the prompt. To provide protection for uploaded files, it is recommended to use sensitivity labels combined with label-based DLP rules.

What is DSPM for AI and how does it integrate with DLP?

DSPM for AI (Data Security Posture Management for AI) is a solution that assesses an organization's overall security posture in AI interactions. It brings together DLP alerts, Activity Explorer data, and user interaction analytics to detect risky behavior patterns. DSPM for AI serves as a centralized management point for measuring the effectiveness of DLP policies, identifying areas for improvement, and taking proactive actions on AI security.