Consider this: you’re a new security operations coordinator at a huge corporation that’s fending off dozens of ransomware attacks every day. On your first day, you must assess the threats, understand them, and build a defense strategy.
Naturally, Microsoft believes generative AI can help a lot here, and now, after a year of beta testing, it is formally launching Copilot for Security, a platform that could make that first day go much more smoothly.
In some ways, Copilot for Security (formerly known as ‘Security Copilot’) is similar to a customized version of the Copilot generative AI (based on GPT-4) found in Windows, Microsoft 365, and the increasingly popular mobile app, but with enterprise-level security at its core.
When it comes to security, businesses of all sizes need all the help they can get. According to Microsoft, there are 4,000 password attacks every second and 300 distinct nation-state threat actors. The company says one of these attackers can gain complete access to your data within 72 minutes of someone in your organization clicking a phishing link. Such attacks cost corporations trillions of dollars each year.
During demos, I was shown how Microsoft Copilot for Security functions as a tireless and ultra-fast security consultant, able to scan complex file hashes and scripts to determine their true intent and swiftly identify both known dangers and things that behave like existing threats. Microsoft argues that employing such a service will help close the security talent gap.
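As a rough illustration of the kind of triage such a tool automates, here’s a minimal Python sketch that checks a file’s hash against a known-threat list. The hash feed and values here are invented placeholders for illustration, not Microsoft’s actual detection pipeline, which also weighs behavior rather than hashes alone.

```python
import hashlib

# Hypothetical feed of known-malicious SHA-256 hashes (placeholder value only).
KNOWN_BAD_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def triage(path: str) -> str:
    """Flag a file as a known threat if its hash matches the feed."""
    file_hash = sha256_of_file(path)
    if file_hash in KNOWN_BAD_HASHES:
        return f"KNOWN THREAT: {path} ({file_hash})"
    # No hash match: a real system would fall back to behavioral analysis.
    return f"No known-hash match: {path}"
```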

The platform, which will be priced by usage and the number of security compute units consumed (Microsoft calls this a “pay-as-you-go” model), is explicitly not a doer. At this time, it will not delete or block any suspicious files or emails. Rather, it seeks to explain, guide, and recommend. Furthermore, because it’s a prompt-based system, you can ask it specific questions about its analysis. If Greg in IT is discovered downloading or modifying hundreds of files, you might request more information about his activity.
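To make that prompt-driven workflow concrete, here is a minimal Python sketch of what such a query might look like programmatically. The endpoint, client code, response shape, and prompt wording are all assumptions for illustration, not Microsoft’s actual API.

```python
import requests

# Placeholder endpoint; not a real Microsoft URL.
COPILOT_ENDPOINT = "https://example.invalid/security-copilot/query"

def ask_copilot(prompt: str, session_token: str) -> str:
    """Send a natural-language security question and return the answer text."""
    response = requests.post(
        COPILOT_ENDPOINT,
        headers={"Authorization": f"Bearer {session_token}"},
        json={"prompt": prompt},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]  # assumed response shape

# Example: drilling into a suspicious user's activity in plain English.
# answer = ask_copilot(
#     "Summarize all files Greg in IT downloaded or modified in the last "
#     "24 hours and flag anything resembling bulk exfiltration.",
#     session_token="<token>",
# )
```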
Microsoft Copilot for Security is designed to integrate with Microsoft products, although it can also work with a variety of third-party plugins.
It can also monitor other generative AI platforms and detect when employees begin sharing sensitive, private, or even encrypted company information with these chatbots. If you’ve configured permissions to prevent such files from being shared with specific third-party chatbots, you can apply that rule, and Copilot for Security will recognize the file’s sensitivity, apply a ‘confidential’ label, and automatically block sharing.
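Here’s a short sketch of the kind of rule that passage describes: a document carrying a sensitivity label gets blocked from unapproved chatbots. The label names, approved list, and policy structure are assumptions for illustration, not Microsoft Purview’s real schema.

```python
from dataclasses import dataclass

# Hypothetical policy: labels that must not leave the organization,
# and chatbots that are approved to receive labeled content.
BLOCKED_LABELS = {"confidential", "highly confidential"}
APPROVED_CHATBOTS = {"copilot-for-security"}

@dataclass
class Document:
    name: str
    label: str  # sensitivity label applied by the DLP system

def may_share(doc: Document, chatbot: str) -> bool:
    """Allow sharing only if the chatbot is approved or the document
    carries no blocked sensitivity label."""
    if chatbot in APPROVED_CHATBOTS:
        return True
    return doc.label.lower() not in BLOCKED_LABELS

# Example: a confidential file headed for a third-party chatbot gets blocked.
doc = Document(name="q3_forecast.xlsx", label="Confidential")
print(may_share(doc, "third-party-chatbot"))  # False -> sharing blocked
```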
The advantage of using an AI is that you can investigate a threat in natural language rather than sifting through menus for the right tool or action. It becomes a two-way street: a sophisticated, security-aware system that understands the context of your questions and can dig in and guide you in real time.
Emphasis on aid
For all of its research and recommendations, Microsoft Copilot for Security does not take action itself; it relies on Windows Defender or other security solutions for mitigation.
Microsoft says that Copilot for Security can make practically any security expert more effective. The company has been beta testing the platform for a year, and the initial results are promising: it found that Copilot for Security made newcomers to the field 26% faster and 34% more accurate in their threat assessments, while experienced users were 2% faster and 7% more productive.
More crucially, Microsoft asserts that analysts using the tool were 46% more accurate at security summarization and incident analysis than those working without it.
Copilot for Security will be generally available on April 1, and this is no joke.