A former security architect demonstrates 15 different ways to break Copilot: “Microsoft is trying, but if we are honest here, we don’t know how to build secure AI applications”

A former security architect demonstrates 15 different ways to break Copilot: "Microsoft is trying, but if we are honest here, we don't know how to build secure AI applications"

What you need to know

  • Former Microsoft security architect Michael Bargury has identified multiple loopholes hackers can leverage to break Copilot and gain access to sensitive data.
  • Microsoft had previously announced its plans to pump the brakes on shipping new experiences to Copilot to improve existing ones based on feedback.
  • Microsoft recently highlighted several measures it is implementing to address the rising security concerns across its tech stack, including tying a section of top executives’ compensation packages to their security deliverables.

As someone who has been following tech news for years and witnessed countless security breaches, it’s disheartening to see yet another giant like Microsoft struggling with AI security. Former Microsoft security architect Michael Bargury’s demonstration at Black Hat USA 2024 was a stark reminder that even the most advanced tools can be vulnerable if not properly secured.


At the Black Hat USA 2024 conference, the former Microsoft security architect, Michael Bargury, demonstrated several vulnerabilities that malicious users could exploit to bypass Copilot’s security measures and abuse its functionalities for harmful purposes.

Bargury demonstrated multiple ways hackers can leverage their exploits to access sensitive and intricate credentials from users using Copilot. More specifically, the security architect’s findings were centered on Microsoft 365 Copilot. For context, it’s an AI-powered experience embedded into the Microsoft 365 suite, including Word and Excel. It accesses your data for a tailored user experience and enhanced workflow. 

Privacy and safeguarding information are major issues that can hinder the advancement of artificial intelligence for many users. Microsoft has implemented security measures to shield user data as they utilize Microsoft 365 Copilot, but these safeguards were successfully circumvented by Bargury.

In one of the demos dubbed LOLCopilot, Bargury deployed a spear-phishing attack on the AI tool, allowing the security expert to access internal emails. Based on the information gathered from the emails, the tool can draft and send mass emails while mimicking the author’s writing style to maintain authenticity. 

It might be even more troubling that Copilot could potentially be manipulated to retrieve confidential employee data without triggering security warnings. Malicious actors can craft prompts that guide the chatbot to omit mentions of the original files, thereby circumventing Microsoft’s security measures designed to protect sensitive data.

Microsoft is trying, but if we are honest here, we don’t know how to build secure AI applications.

According to recent findings, cybercriminals are employing complex strategies to attract unaware users into their traps, even leveraging artificial intelligence. This complexity makes identifying potential threats more challenging. In an interview with Wired, Bargury commented, “A hacker might spend days perfecting the right email to prompt you to open it, but they can quickly produce hundreds of such emails within a few minutes.”

Microsoft needs to lay more security layers on its top priority 

A former security architect demonstrates 15 different ways to break Copilot: "Microsoft is trying, but if we are honest here, we don't know how to build secure AI applications"

Advanced AI technologies such as ChatGPT and Microsoft Copilot have arisen due to generative AI, offering complex capabilities like image and text generation. These tools are significantly altering the way people engage with the internet. In fact, a former Google engineer has pointed out that OpenAI’s temporary prototype search tool – SearchGPT – could potentially pose a significant challenge to Google’s lead in search technology.

In the early part of this year, Microsoft announced they would no longer be developing new features for Copilot. Instead, they plan to utilize this time to fine-tune and enhance the current user experiences, taking into account feedback received.

Over the last several months, Microsoft has been making a significant effort to strengthen its security features, with this area now taking precedence in their operations. This was emphasized by Microsoft CEO Satya Nadella during their FY24 Q3 earnings report when he stated, “Security is the foundation at every level of our technology stack, and it’s our utmost priority.”

Microsoft has encountered criticism due to a series of security issues, one of which is the AI-driven Windows Recall feature that they had to retract prior to its exclusive release on Copilot+ computers.

Although we’ve made an effort to make security a collaborative task within the company and linked a portion of high-level executives’ remuneration to their security performance, it seems that there are still numerous security vulnerabilities persisting.

Read More

2024-08-13 13:40