Reprompt Attack: Microsoft Copilot Data Theft Risk

A new attack technique called Reprompt allows malicious actors to hijack active Microsoft Copilot sessions and trick the AI into leaking sensitive user data, according to researchers at Malwarebytes and independent security analysts.

Discovered in mid-January 2026, the attack exploits Copilot’s conversational nature by sending repeated, crafted prompts that override previous context or force the AI to reveal confidential information from earlier interactions (emails, documents, code snippets, personal details). Unlike traditional phishing, Reprompt requires no malware installation — just a malicious link or shared prompt that the victim clicks while logged into Copilot.

How the Attack Works

1.Victim is actively using Copilot in Edge, Teams, or Bing.

2.Attacker sends a specially crafted link or prompt (via email, chat, social engineering).

3.Once clicked, Reprompt floods Copilot with adversarial inputs that confuse context windows.

4.Copilot begins regurgitating sensitive data from the session history — including attached files, conversation logs, or integrated Microsoft 365 content.

5.Attacker captures output via screenshot, logging, or secondary channel.

Varonis Threat Labs confirmed proof-of-concept demos in controlled environments, showing Copilot leaking full email threads, OneNote pages, and even Azure resource keys when properly prompted.
Varonis Threat Labs diagram of Reprompt attack flow on Microsoft Copilot

Source: Varonis Threat Labs, “Reprompt Attack Lets Attackers Steal Data from Microsoft Copilot,” January 14, 2026.
Original article.

Varonis confirmed proof-of-concept demos in controlled environments, showing Copilot leaking full email threads, OneNote pages, and even Azure resource keys when properly prompted.

Strong Parallels to Malicious AI Chrome Extensions

This attack closely mirrors the malicious AI productivity extensions campaign we reported on January 10, 2026, where over 900,000 Chrome users were infected by fake AI tools (impersonating productivity assistants) that stole chat histories, session tokens, and credentials.

Both attacks exploit:

  • Trust in AI interfaces — users assume Copilot or AI extensions are safe because they’re from “Microsoft” or look legitimate.
  • Context retention — AI models keep conversation history, making them rich targets for data exfiltration.
  • Low barrier to entry — no need for traditional malware; just prompt engineering or a malicious extension.

In the extensions case, attackers stole chat logs from tools like ChatGPT, Claude, and Gemini. With Reprompt, the target is Microsoft’s own Copilot — showing that no major AI platform is immune.

Why This Matters in 2026

Microsoft Copilot adoption is surging in enterprises (integrated with M365, Teams, Power Platform). A single hijacked session can expose:

  • Confidential business documents
  • Azure credentials / resource IDs
  • Internal chat logs containing PII or IP

This highlights how AI tools are becoming the new attack surface — just like browsers were in the 2010s.

Mitigation Steps for Users & Organizations

  1. Never click unsolicited Copilot links or share sessions publicly.
  2. Enable session isolation — use Incognito or separate browser profiles for Copilot.
  3. Monitor Microsoft 365 audit logs for unusual Copilot activity (sign-ins, prompt volume).
  4. Restrict Copilot access via Conditional Access policies in Entra ID (require compliant devices, block legacy auth).
  5. Educate teams — treat AI chats like email: no sensitive data in prompts.
  6. Check extensions — audit Chrome/Edge for unknown productivity tools (see our Jan 10 report on malicious AI extensions stealing chats).

Read our earlier coverage: Malicious AI Chrome Extensions Steal Chats – Over 900,000 Victims

Varonis Threat Labs uncovered a new attack flow, dubbed Reprompt, that gives threat actors an invisible entry point to perform a data‑exfiltration chain that bypasses enterprise security controls entirely and accesses sensitive data without detection — all from one click. Tal, Dolev. ““Reprompt” Attack Lets Attackers Steal Data from Microsoft Copilot.” Varonis Blog, January 14, 2026.

Source and Full Details

Varonis analysis of the Reprompt technique

CISA STATUS 1505 ACTIVE EXPLOITS
● VIEW RECENT THREATS
Latest (10) KEVs
CVE-2021-39935 Added: Feb 03, 2026
GitLab Community and Enterprise Editions Server-Side Request Forgery (SSRF) Vulnerability
CVE-2025-64328 Added: Feb 03, 2026
Sangoma FreePBX OS Command Injection Vulnerability
CVE-2019-19006 Added: Feb 03, 2026
Sangoma FreePBX Improper Authentication Vulnerability
CVE-2025-40551 Added: Feb 03, 2026
SolarWinds Web Help Desk Deserialization of Untrusted Data Vulnerability
CVE-2026-1281 Added: Jan 29, 2026
Ivanti Endpoint Manager Mobile (EPMM) Code Injection Vulnerability
CVE-2026-24858 Added: Jan 27, 2026
Fortinet Multiple Products Authentication Bypass Using an Alternate Path or Channel Vulnerability
CVE-2018-14634 Added: Jan 26, 2026
Linux Kernel Integer Overflow Vulnerability
CVE-2025-52691 Added: Jan 26, 2026
SmarterTools SmarterMail Unrestricted Upload of File with Dangerous Type Vulnerability
CVE-2026-23760 Added: Jan 26, 2026
SmarterTools SmarterMail Authentication Bypass Using an Alternate Path or Channel Vulnerability
CVE-2026-24061 Added: Jan 26, 2026
GNU InetUtils Argument Injection Vulnerability
THREAT #1 CVE-2024-27198 94.58% SCORE
● VIEW DETAILED TOP 10
Global Intelligence
RANK #1 CVE-2024-27198 Score: 94.58% JetBrains TeamCity Authentication Bypass Vulnerability
RANK #2 CVE-2023-23752 Score: 94.52% Joomla! Improper Access Control Vulnerability
RANK #3 CVE-2017-1000353 Score: 94.51% Jenkins Remote Code Execution Vulnerability
RANK #4 CVE-2017-8917 Score: 94.50%
Known Security Vulnerability
RANK #5 CVE-2024-27199 Score: 94.49%
Known Security Vulnerability
RANK #6 CVE-2018-7600 Score: 94.49% Drupal Core Remote Code Execution Vulnerability
RANK #10 CVE-2018-13379 Score: 94.48% Fortinet FortiOS SSL VPN Path Traversal Vulnerability
GLOBAL THREAT GREEN Condition Level
VIEW THREAT REPORT
Threat Intelligence
Source: SANS ISC Report ↗ The InfoCon is a status system used by the SANS Internet Storm Center to track global internet threat levels.

Reprompt Attack: Microsoft Copilot Data Theft Risk

A new attack technique called Reprompt allows malicious actors to hijack active Microsoft Copilot sessions and trick the AI into leaking sensitive user data, according to researchers at Malwarebytes and independent security analysts.

Discovered in mid-January 2026, the attack exploits Copilot’s conversational nature by sending repeated, crafted prompts that override previous context or force the AI to reveal confidential information from earlier interactions (emails, documents, code snippets, personal details). Unlike traditional phishing, Reprompt requires no malware installation — just a malicious link or shared prompt that the victim clicks while logged into Copilot.

How the Attack Works

1.Victim is actively using Copilot in Edge, Teams, or Bing.

2.Attacker sends a specially crafted link or prompt (via email, chat, social engineering).

3.Once clicked, Reprompt floods Copilot with adversarial inputs that confuse context windows.

4.Copilot begins regurgitating sensitive data from the session history — including attached files, conversation logs, or integrated Microsoft 365 content.

5.Attacker captures output via screenshot, logging, or secondary channel.

Varonis Threat Labs confirmed proof-of-concept demos in controlled environments, showing Copilot leaking full email threads, OneNote pages, and even Azure resource keys when properly prompted.
Varonis Threat Labs diagram of Reprompt attack flow on Microsoft Copilot

Source: Varonis Threat Labs, “Reprompt Attack Lets Attackers Steal Data from Microsoft Copilot,” January 14, 2026.
Original article.

Varonis confirmed proof-of-concept demos in controlled environments, showing Copilot leaking full email threads, OneNote pages, and even Azure resource keys when properly prompted.

Strong Parallels to Malicious AI Chrome Extensions

This attack closely mirrors the malicious AI productivity extensions campaign we reported on January 10, 2026, where over 900,000 Chrome users were infected by fake AI tools (impersonating productivity assistants) that stole chat histories, session tokens, and credentials.

Both attacks exploit:

  • Trust in AI interfaces — users assume Copilot or AI extensions are safe because they’re from “Microsoft” or look legitimate.
  • Context retention — AI models keep conversation history, making them rich targets for data exfiltration.
  • Low barrier to entry — no need for traditional malware; just prompt engineering or a malicious extension.

In the extensions case, attackers stole chat logs from tools like ChatGPT, Claude, and Gemini. With Reprompt, the target is Microsoft’s own Copilot — showing that no major AI platform is immune.

Why This Matters in 2026

Microsoft Copilot adoption is surging in enterprises (integrated with M365, Teams, Power Platform). A single hijacked session can expose:

  • Confidential business documents
  • Azure credentials / resource IDs
  • Internal chat logs containing PII or IP

This highlights how AI tools are becoming the new attack surface — just like browsers were in the 2010s.

Mitigation Steps for Users & Organizations

  1. Never click unsolicited Copilot links or share sessions publicly.
  2. Enable session isolation — use Incognito or separate browser profiles for Copilot.
  3. Monitor Microsoft 365 audit logs for unusual Copilot activity (sign-ins, prompt volume).
  4. Restrict Copilot access via Conditional Access policies in Entra ID (require compliant devices, block legacy auth).
  5. Educate teams — treat AI chats like email: no sensitive data in prompts.
  6. Check extensions — audit Chrome/Edge for unknown productivity tools (see our Jan 10 report on malicious AI extensions stealing chats).

Read our earlier coverage: Malicious AI Chrome Extensions Steal Chats – Over 900,000 Victims

Varonis Threat Labs uncovered a new attack flow, dubbed Reprompt, that gives threat actors an invisible entry point to perform a data‑exfiltration chain that bypasses enterprise security controls entirely and accesses sensitive data without detection — all from one click. Tal, Dolev. ““Reprompt” Attack Lets Attackers Steal Data from Microsoft Copilot.” Varonis Blog, January 14, 2026.

Source and Full Details

Varonis analysis of the Reprompt technique

Follow us on
© 2026 ByteVanguard • Independent Cyber Threat Intelligence