Using Generative AI to Improve Cloud Security Detection and Response

    Detecting malicious activity in the cloud can be difficult for security professionals, who must sift through numerous false positives to find legitimate security incidents. A study by cybersecurity vendor Orca Security found that 59% of IT security professionals receive more than 500 public cloud security alerts per day. This flood leaves defenders so busy triaging trivial alerts that they routinely miss important ones.

    To meet these challenges, more cybersecurity vendors are turning to generative AI to help security teams understand cloud activity. One such vendor, Skyhawk Security, has integrated generative AI, specifically the ChatGPT API, into its cloud detection and response (CDR) solution. The integration spans two features: Threat Detector and Security Advisor.

    Threat Detector leverages the ChatGPT API, trained on millions of security signals, to analyze cloud events and generate alerts faster. Security Advisor provides a natural language overview of live alerts along with recommendations on how to respond and remediate. This automated approach to alert management has proven effective: in 78% of cases, the CDR platform generated alerts earlier when using the ChatGPT API.
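In practice, an advisor feature like this typically sends the raw alert to an LLM with a prompt asking for a plain-language summary and remediation guidance. A minimal sketch using the OpenAI Python SDK follows; the alert fields, prompt wording, and function names are illustrative assumptions, not Skyhawk's actual implementation:

```python
# Sketch: turning a raw cloud alert into a natural-language summary
# via the OpenAI Chat Completions API. Alert fields and prompt text
# are illustrative assumptions, not Skyhawk's actual code.
import json


def build_alert_prompt(alert: dict) -> str:
    """Format a raw alert dict into a prompt asking for a
    plain-language summary and remediation steps."""
    return (
        "You are a cloud security advisor. Summarize this alert in plain "
        "language and suggest remediation steps:\n"
        + json.dumps(alert, indent=2)
    )


def explain_alert(client, alert: dict, model: str = "gpt-4o-mini") -> str:
    """Send the alert to the model and return its explanation.
    `client` is an openai.OpenAI() instance (requires an API key)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_alert_prompt(alert)}],
    )
    return response.choices[0].message.content
```

A SOC dashboard could call `explain_alert` when an analyst opens an alert, caching the response so repeated views do not trigger repeated API calls.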

    Skyhawk Security CEO Chen Burshan said generative AI has improved detection and response for cloud engineers and SOC incident responders. According to Burshan, ChatGPT helps organizations overcome the shortage of cloud-skilled talent by augmenting the capabilities of the SOC.

    Skyhawk Security’s solution also includes machine learning algorithms for monitoring cloud assets. These algorithms distinguish normal usage from malicious behavior indicators (MBIs), and an alert is generated when an asset’s MBI score exceeds a defined threshold. ChatGPT-trained threat detection augments the data provided by these ML-driven threat scoring mechanisms to improve alert identification and prioritization.
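The threshold logic described above can be sketched as a simple weighted scoring loop. The indicator names, weights, and threshold below are illustrative assumptions; Skyhawk's actual models and MBI scoring are proprietary:

```python
# Sketch of threshold-based alerting on malicious behavior indicators
# (MBIs). Indicator names, weights, and the threshold are illustrative
# assumptions, not Skyhawk's actual scoring.

# Hypothetical per-indicator weights, as might be learned from
# historical activity baselines.
MBI_WEIGHTS = {
    "unusual_api_calls": 0.4,
    "new_geolocation": 0.3,
    "privilege_escalation": 0.8,
    "mass_data_egress": 0.9,
}

ALERT_THRESHOLD = 1.0  # combined score above this triggers an alert


def mbi_score(observed_indicators: list[str]) -> float:
    """Sum the weights of the indicators observed for one cloud asset."""
    return sum(MBI_WEIGHTS.get(name, 0.0) for name in observed_indicators)


def should_alert(observed_indicators: list[str]) -> bool:
    """Generate an alert when the combined MBI score exceeds the threshold."""
    return mbi_score(observed_indicators) > ALERT_THRESHOLD


# A single low-risk indicator stays below the threshold...
print(should_alert(["new_geolocation"]))  # False
# ...while a cluster of high-risk indicators crosses it.
print(should_alert(["privilege_escalation", "mass_data_egress"]))  # True
```

In a real system the score would come from a trained model rather than fixed weights, but the alerting decision is the same: accumulate evidence per asset and alert only when it crosses a tuned threshold, which is what keeps false positives down.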

    However, it is essential to recognize the limitations of generative AI in cybersecurity. It is a powerful tool, but it should be applied judiciously to avoid errors and potential privacy issues. Generative AI is most effective at explaining complex alerts in natural language and providing insight on how to respond.

    By leveraging generative AI, organizations can enhance the decision-making process of security analysts, making it easier to investigate alerts and respond quickly to incidents. This ultimately enhances the protection of on-premises and cloud environments against threat actors.
