Microsoft 365 Copilot Vulnerable to Zero-Click EchoLeak Exploit, Cybersecurity Researchers Say

Microsoft 365 Copilot, the enterprise-focused artificial intelligence (AI) chatbot that works across Office apps, was reportedly vulnerable to a zero-click vulnerability. As per a cybersecurity firm, a flaw existed in the chatbot that could be triggered via a simple text email to hack into it. Once the chatbot was hacked, it could then be made to retrieve sensitive information from the user’s device and share it with the attacker. Notably, the Redmond-based tech giant said that it has fixed the vulnerability, and that no users were affected by it.

Researchers Find Zero-Click Vulnerability in Copilot

In a blog post, AI security startup Aim Security detailed the zero-click exploit and how the researchers were able to execute it. Notably, a zero-click attack refers to hacking attempts where the victim does not have to download a file or click on a URL for the attack to be triggered. A simple act such as opening an email can initiate the hacking attempt.

The findings by the cybersecurity firm highlights the risks that AI chatbots pose, especially if they have agentic capability, which refers to the ability of an AI chatbot to access tools to execute actions. For example, Copilot being able to connect to OneDrive and retrieving data from a file stored there to answer a user query would be considered an agentic action.

As per the researchers, the attack was initiated using cross-prompt injection attack (XPIA) classifiers. These is a form of prompt injection, where an attacker manipulates the input across multiple prompts, sessions, or messages to influence or control the behaviour of an AI system. The malicious message is often added via attached files, hidden or invisible text, or embedded instructions.

The researchers shared the XPIA bypass via email. However, they also showed the same could be done via an image (embedding the malicious instruction in the alt text), and even via Microsoft Team by excuting a GET request for a malicious URL. While the first two methods still require the user to ask a query about the email or the image, the latter does not require users to take any particular action for the hacking attempt to begin.

“The attack results in allowing the attacker to exfiltrate the most sensitive data from the current LLM context – and the LLM is being used against itself in making sure that the MOST sensitive data from the LLM context is being leaked, does not rely on specific user behavior, and can be executed both in single-turn conversations and multi-turn conversations,” the post added.

Notably, a Microsoft spokesperson acknowledged the vulnerability and thanked Aim for identifying and reporting the issue, according to a Fortune report. The issue has now been fixed, and no users were affected by it, the spokesperson told the publication.

Leave a Comment