Microsoft adds GPT-4 to its defensive suite in Security Copilot

A new AI security tool that answers questions about vulnerabilities and reverse engineering issues is now in preview.


Artificial intelligence continues to extend its reach across the technology industry.

Microsoft has added Security Copilot, a natural language chatbot that can write and analyze code, to its lineup of products powered by OpenAI’s GPT-4 generative artificial intelligence model. Security Copilot, which was announced on Wednesday, is now available in preview to select customers. Microsoft will share additional information through email updates when Security Copilot becomes generally available.


What is Microsoft Security Copilot?

Microsoft Security Copilot is a natural language AI assistant that appears as a prompt bar. This security tool will be able to do the following (sketched in code after the list):

  • Answer conversational questions such as, “What are all the incidents in my business?”
  • Write summaries.
  • Accept input such as URLs or code snippets.
  • Point to the sources the AI gathered its information from.
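Microsoft has not published a public API for Security Copilot, so as a rough illustration of the interaction pattern above (a conversational question in, a summary plus cited sources out), here is a minimal, entirely hypothetical Python sketch; every function and field name in it is invented for illustration:

```python
# Hypothetical sketch only: Microsoft has not published a Security Copilot
# API. This imagines the interaction the list above describes: a question
# in, a natural language summary plus cited sources out.
from dataclasses import dataclass, field


@dataclass
class CopilotAnswer:
    summary: str                                       # natural language answer
    sources: list[str] = field(default_factory=list)   # where the AI got it


def ask_security_copilot(question: str) -> CopilotAnswer:
    """Stand-in for the real prompt bar; returns canned data."""
    return CopilotAnswer(
        summary=f"3 open incidents match: {question!r}",
        sources=["Microsoft Sentinel", "Defender alerts"],
    )


answer = ask_security_copilot("What are all the incidents in my business?")
print(answer.summary)
for source in answer.sources:
    print(f"  cited source: {source}")
```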

The AI is built on OpenAI’s large language model as well as a security-specific model from Microsoft. That proprietary model draws on established and ongoing global threat intelligence. Companies already familiar with the Azure Hyperscale infrastructure will find the same security and privacy features attached to Security Copilot.

SEE: Microsoft launches general availability of Azure OpenAI service (TechRepublic)

How does Security Copilot help you detect, analyze and mitigate threats?

Microsoft is positioning Security Copilot as a way for IT departments to address workforce shortages and skills gaps. Cybersecurity “critically needs more professionals,” according to the International Information System Security Certification Consortium (ISC)²; the consortium’s 2022 workforce study put the worldwide gap between cybersecurity jobs and workers at 3.4 million.


Skills gaps can lead organizations to look for ways to support employees who are newer or less familiar with specific tasks. Security Copilot automates some of those tasks: security personnel can type a prompt such as “check for compromise” to kick off threat hunting around an alert. Users can save prompts and share prompt books with other members of their team; these prompt books record what the AI was asked and how it answered.
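Microsoft has not documented the prompt book format, so purely as a sketch of the save-and-share behavior described above (the PromptBook class and its fields are invented here for illustration), the idea might look like this:

```python
# Hypothetical sketch of a "prompt book": a shareable record of what the
# AI was asked and how it answered. The real format is undocumented.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class PromptBookEntry:
    prompt: str      # what the analyst asked, e.g. "check for compromise"
    response: str    # what the AI answered
    timestamp: str   # when the exchange happened (UTC, ISO 8601)


class PromptBook:
    """Accumulates prompt/response pairs so a team can replay an investigation."""

    def __init__(self) -> None:
        self.entries: list[PromptBookEntry] = []

    def record(self, prompt: str, response: str) -> None:
        now = datetime.now(timezone.utc).isoformat()
        self.entries.append(PromptBookEntry(prompt, response, now))

    def share(self) -> str:
        # Serialize to JSON so the record can be handed to a teammate.
        return json.dumps([asdict(e) for e in self.entries], indent=2)


book = PromptBook()
book.record("check for compromise", "2 hosts show anomalous sign-in activity")
print(book.share())
```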

Security Copilot can summarize an event, incident or threat and create a shareable report. It can also reverse engineer a malicious script and explain what the script does.

SEE: Microsoft adds the Copilot AI productivity bot to the 365 plan (TechRepublic)

Copilot integrates with several existing Microsoft security products. Microsoft Sentinel (a security information and event management tool), Defender (extended detection and response) and Intune (endpoint management and threat mitigation) can all communicate with Security Copilot and feed it information.

Microsoft reassures users that the data and prompts each organization provides remain secure within that organization. The tech giant also builds transparent audit trails into the AI, so developers can see what questions were asked and how Copilot answered them. Security Copilot data is never fed back into Microsoft’s pools of training data for other AI models, reducing the chance that one company’s confidential information will surface in an answer given to another company.

Is AI-powered cybersecurity safe?

While natural language AI can fill gaps for overworked or undertrained staff, managers and department heads need a framework in place for keeping an eye on its work before any code goes live; after all, AI can produce false or misleading results. (Microsoft includes an option to report when Security Copilot gets something wrong.)


Soo Choi-Andrews, co-founder and CEO of the security firm Mondoo, raised the following concerns that cybersecurity decision-makers should weigh before handing AI tools to their teams.

“Security teams should approach AI tools with the same rigor they would when evaluating any new product,” Choi-Andrews said in an email interview. “Understanding the limitations of AI is essential, as most tools are still based on probabilistic algorithms that do not always produce accurate results… When considering AI implementation, CISOs must ask themselves whether the technology will help the business unlock revenue faster while protecting assets and meeting compliance obligations.”

“When it comes to the scale of AI adoption, the landscape is evolving rapidly and there is no one-size-fits-all answer,” said Choi-Andrews.

SEE: As a cybersecurity blade, ChatGPT can cut both ways (TechRepublic)

On March 20, 2023, OpenAI experienced a data breach. “Earlier this week, we took ChatGPT offline due to a bug in an open source library which allowed some users to see titles from another active user’s chat history,” OpenAI wrote in a blog post on March 24, 2023. The open source library involved, the Redis client redis-py, has since been patched.
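The redis-py flaw itself involved connection state being corrupted by canceled requests; purely as a deliberately simplified sketch of the broader bug class (one user’s data served to another because a lookup is not scoped to the requesting user), and not the actual bug, consider:

```python
# Simplified illustration of the bug class behind cross-user leaks: a
# shared cache whose key omits the requesting user. This is NOT the
# actual redis-py flaw, which involved corrupted connection state.
cache: dict[str, str] = {}


def get_chat_titles(user_id: str, query: str) -> str:
    key = query  # BUG: key is not scoped to user_id
    if key not in cache:
        cache[key] = f"chat titles for {user_id}"  # stand-in for a real lookup
    return cache[key]


def get_chat_titles_fixed(user_id: str, query: str) -> str:
    key = f"{user_id}:{query}"  # FIX: scope the cache key to the user
    if key not in cache:
        cache[key] = f"chat titles for {user_id}"
    return cache[key]


print(get_chat_titles("alice", "recent"))      # chat titles for alice
print(get_chat_titles("bob", "recent"))        # also alice's titles -- leaked
print(get_chat_titles_fixed("bob", "recent"))  # chat titles for bob
```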

Today, more than 1,700 people, including Elon Musk and Steve Wozniak, signed a petition calling on AI companies such as OpenAI to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4” in order to “jointly develop and implement a set of shared safety protocols.” The petition was launched by the Future of Life Institute, a nonprofit dedicated to putting artificial intelligence to beneficial use and reducing the possibility of “large-scale risks” such as “militarized AI.”


Both attackers and defenders use OpenAI products

Google, Microsoft’s main rival in the race to find the most profitable uses for natural language AI, has yet to announce a dedicated enterprise security AI product. Microsoft announced in January 2023 that its cybersecurity arm is a $20 billion business today.

A few other security-focused companies have grafted OpenAI’s conversational products onto their own. ARMO, which makes the Kubescape security platform for Kubernetes, added a ChatGPT-based custom controls feature in February. Orca Security added GPT-3, the state-of-the-art OpenAI model at the time, to its cloud security platform in January to generate remediation instructions for customers. Skyhawk Security has added the generative AI model to its cloud threat detection and response products, too.

Conversely, generative AI could be another siren call to those on the black hat side of the cybersecurity line, as hackers and large companies continue to race both to build the most defensible digital walls and to break through them.

“It’s important to note that artificial intelligence is a double-edged sword: while it can benefit security measures, attackers can also use it for their own purposes,” Choi-Andrews said.

Source: https://www.techrepublic.com/article/microsoft-security-copilot-gpt-4/