Microsoft fixes single-click Copilot security flaw that could have exposed user data.
Quick Overview
- Microsoft has addressed a significant vulnerability in Copilot AI referred to as Reprompt.
- This defect enabled attackers to extract user data through a single-click prompt injection.
- The vulnerability was uncovered by Varonis and subsequently reported to Microsoft.
- The exploit had the potential to access data including file access history and conversation memory.
- Users are advised to be vigilant with links that direct to AI tools and prefilled prompts.
Understanding the Copilot Vulnerability
Microsoft's Copilot AI assistant was recently found to contain a serious security flaw dubbed Reprompt. The issue was discovered by data security firm Varonis and has since been patched by Microsoft. It allowed attackers to extract sensitive user information via a specially crafted single-click prompt injection.
Mechanism of the Reprompt Exploit
Reprompt worked by tricking users into clicking a link that appeared legitimate but opened Microsoft's Copilot in a web browser. The link carried a specially crafted ?q= parameter containing a pre-filled AI prompt, a technique Varonis calls Parameter 2 Prompt (P2P) injection. Once the user's authenticated Copilot session loaded, the AI would begin communicating with a server controlled by the attacker, allowing it to exfiltrate data such as conversation history, location details, file access history, and more.
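To make the mechanism concrete, here is a minimal sketch of how a link with a pre-filled prompt parameter is put together. It assumes, as described above, that a ?q= query parameter is what prefills the Copilot prompt; the domain handling and the placeholder prompt text are illustrative only and are not the actual Reprompt payload.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Illustrative only: assumes the ?q= query parameter prefills the Copilot
# prompt, as described in Varonis' disclosure. The prompt text below is a
# harmless placeholder, not the real attack payload.
base_url = "https://copilot.microsoft.com/"
prefilled_prompt = "Summarise my recent files and activity"

crafted_link = f"{base_url}?{urlencode({'q': prefilled_prompt})}"
print(crafted_link)

# Parsing the link back shows why such URLs deserve scrutiny: the "prompt"
# arrives as ordinary URL data before the user has typed anything.
params = parse_qs(urlparse(crafted_link).query)
print(params.get("q"))
```

The point of the sketch is simply that the prompt rides along inside the URL itself, so a single click is enough to hand the attacker's instructions to an already authenticated Copilot session.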
Implications of the Attack
Once initiated, the attack could persist even after the user closed the Copilot chat window, because it leveraged session-level context. It could also evade client-side detection tools, since the payload was delivered through later AI responses. Varonis noted that no further user actions or plugins were required after the initial click, and there was no restriction on the types of data that could be extracted.
Prevention and User Advice
Although no Common Vulnerabilities and Exposures (CVE) identifier has been assigned to Reprompt and no in-the-wild exploitation has been reported, Varonis recommends that users remain cautious. That means scrutinising links, particularly those that open AI tools or arrive with prefilled prompts, and treating AI requests for personal details with suspicion. Users should also end suspicious sessions and report any unusual activity promptly.
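For readers who want a programmatic version of "scrutinise the link", the sketch below flags URLs that open a known AI assistant with a prefilled prompt. The domain list and parameter names are assumptions chosen for illustration, not an exhaustive or authoritative set.

```python
from urllib.parse import urlparse, parse_qs

# Illustrative heuristic only: the domain list and prefill parameter names
# are assumptions for demonstration, not a complete or official list.
AI_ASSISTANT_DOMAINS = {"copilot.microsoft.com"}
PREFILL_PARAMS = {"q"}

def link_prefills_ai_prompt(url: str) -> bool:
    """Return True if a URL opens a known AI assistant with a prefilled prompt."""
    parsed = urlparse(url)
    if parsed.hostname not in AI_ASSISTANT_DOMAINS:
        return False
    params = parse_qs(parsed.query)
    return any(name in params for name in PREFILL_PARAMS)

print(link_prefills_ai_prompt("https://copilot.microsoft.com/?q=do+something"))  # True
print(link_prefills_ai_prompt("https://copilot.microsoft.com/"))                 # False
```

A check like this is no substitute for the patch Microsoft has already shipped, but it illustrates the kind of link hygiene Varonis is recommending.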
Recap
Microsoft has promptly addressed a critical security vulnerability in its Copilot AI, which could have enabled attackers to access sensitive user information via a single-click exploit. The flaw, identified by Varonis and termed Reprompt, underscores the importance of being alert when engaging with AI tools. Users are advised to be mindful of links and prefilled prompts to safeguard their online security.