    IT Security on the Lookout

    Artificial intelligence techniques, including generative AI, allow malicious actors to modify malware code to increase the speed and variety of attacks.

    Attackers can also generate thousands of social engineering attacks and deploy them to increase their odds of success.

    On the malware front, generative AI engines like ChatGPT allow cybercriminals to create near-endless code variations to stay ahead of malware detection engines.

    ChatGPT is a publicly accessible interface to OpenAI’s GPT-3 generative AI engine, designed for natural language processing.

    It has been used to generate software, but its real strength is that it acts as a conversational AI that produces human-like responses.

    As generative AI technology advances, so do the various ways this technology can be used for malicious purposes. This forces IT security professionals to adapt their defenses and even use AI to fight back.

    More than half of the respondents to a February survey of 1,500 IT decision makers in North America, the UK, and Australia said they believe a ChatGPT-enabled cyberattack will succeed this year.

    BlackBerry’s report also found that more than eight in 10 respondents (82%) plan to invest in AI-driven cybersecurity over the next two years, and nearly half (48%) plan to invest by the end of the year.

    Generative AI used in multiple attack vectors

    Mike Parkin, senior technical engineer at Vulcan Cyber, explains that attackers can use conversational AI to craft convincing conversations in emails and other text-based interactions that aid in social engineering.

    “There are other applications that build on other machine learning engines to create more sophisticated code and help attackers bypass existing defenses,” he says. “It all comes down to the kind of data the AI is trained on and what it was designed to do.”

    Looking specifically at natural language AI like GPT-3 and ChatGPT, these capabilities can easily enhance an attacker’s ability to generate compelling social engineering hooks.

    “In the right circumstances, it can even be used to script live chat sessions and live conversations,” he says. “This doesn’t even include the possibility of developing code to circumvent existing defenses using machine learning techniques.”

    Parkin says IT security teams can expect to see a new wave of phishing attacks, from broad cast-net campaigns to targeted spear phishing.

    “But we can expect defenses to adapt quickly as well,” he adds. “We can expect more sophisticated filters for email and text messaging that help identify AI-generated content on the fly.”
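
    To make that concrete, here is a minimal Python sketch of such a filter. It is an illustration under stated assumptions, not a production design: score_ai_likelihood is a hypothetical stub standing in for a trained AI-text detector, and the phrase list and threshold are invented for the example.

        # Minimal sketch of an AI-aware mail filter (illustrative only).
        # score_ai_likelihood is a hypothetical stand-in for a trained
        # detector model; the phrase list and threshold are assumptions.

        from dataclasses import dataclass

        SUSPICIOUS_PHRASES = (
            "verify your account",
            "urgent action required",
            "click the link below",
        )

        @dataclass
        class Verdict:
            quarantine: bool
            ai_score: float
            reasons: list[str]

        def score_ai_likelihood(text: str) -> float:
            """Hypothetical stub: a real filter would call a model trained
            to estimate how likely the text is machine-generated."""
            return 0.0  # placeholder score

        def filter_message(body: str, threshold: float = 0.8) -> Verdict:
            reasons = []
            ai_score = score_ai_likelihood(body)
            if ai_score >= threshold:
                reasons.append(f"AI-generated likelihood {ai_score:.2f}")
            lowered = body.lower()
            for phrase in SUSPICIOUS_PHRASES:
                if phrase in lowered:
                    reasons.append(f"phishing phrase: {phrase!r}")
            # Quarantine only when at least two independent signals agree,
            # which keeps the false-positive rate manageable.
            return Verdict(len(reasons) >= 2, ai_score, reasons)

        if __name__ == "__main__":
            sample = "Urgent action required: verify your account via this link."
            print(filter_message(sample))

    The design point is that an AI-likelihood score works best as one signal among several, combined with conventional heuristics rather than trusted on its own.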

    Parkin warns that AI algorithms will only continue to improve with better machine learning and deep learning models, making life more difficult for cybersecurity practitioners.

    “Our defenses will have to adapt to deploy more AI techniques specifically tuned to counter AI-based attacks,” he says.

    Deploying AI as a defensive resource

    SlashNext CEO Patrick Harr says generative AI will forever change the threat landscape for both security vendors and cybercriminals.

    “It’s important to be prepared to protect your organization with security solutions that use generative AI capabilities to detect these types of threats,” he explains. “Traditional security techniques cannot detect this type of attack.”

    In short, using AI-powered security tools is critical to thwarting such attacks.

    “As chatbots improve and become more versatile, hackers will be able to diversify the types of threats they can deliver, increasing the likelihood of a successful compromise,” explains Harr.

    In fact, generative AI technology can be used to develop cyber defenses that can stop ChatGPT-developed ransomware, business email compromises, and other phishing threats.
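
    As a rough sketch of that defensive pattern, the snippet below asks a generative model to triage a suspicious message. It assumes the openai Python package (v1.x interface) with an OPENAI_API_KEY set in the environment; the model name and prompt wording are illustrative choices, not anything prescribed by the vendors quoted here.

        # Illustrative sketch: using a generative model defensively to
        # triage suspected phishing. Assumes the openai package's v1.x
        # client and OPENAI_API_KEY in the environment; the model name
        # and prompt are assumptions made for this example.

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        PROMPT = (
            "You are an email security analyst. Answer with exactly one "
            "word, PHISHING or LEGITIMATE, for the following message:\n\n{body}"
        )

        def triage_email(body: str) -> str:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # assumed model name; substitute your own
                messages=[{"role": "user", "content": PROMPT.format(body=body)}],
                temperature=0,  # deterministic output for repeatable triage
            )
            return response.choices[0].message.content.strip().upper()

        if __name__ == "__main__":
            suspect = (
                "Dear user, your mailbox is full. Click http://example.test/renew "
                "within 24 hours to avoid suspension."
            )
            print(triage_email(suspect))  # expected: PHISHING

    Commercial tools build far more around this core, such as sender reputation, URL analysis, and sandboxing, but the underlying pattern of turning a generative model into a classifier is the same.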

    Expanding Understanding of AI Defense Capabilities

    Casey Ellis, founder and CTO of Bugcrowd, says security teams need to “get their hands dirty” with flexible interfaces like ChatGPT to learn AI’s current capabilities and limitations, and to adapt to a future where AI is a partner in defending systems and data against cyberattacks.

    “Security leaders should also have the right training and education programs in place to enable staff to work with AI systems,” says Ellis. “We also recommend developing protocols for human-machine collaboration and establishing clear areas of responsibility.”

    Ellis adds that it is also important to continuously evaluate the effectiveness of AI systems, adjust them as needed to ensure optimal performance, and stay up to date with new threats and evolving technologies in the cybersecurity landscape.

    “At the end of the day, cybersecurity is a human problem, accelerated by technology,” he says. “Our industry exists because of human creativity, human failure, and human needs.”

    He says AI is unlikely to take over cybersecurity functions entirely, as human operators bring intuition, creativity and ethical decision-making to the task.

    “However, AI will continue to play an increasingly important role as cybersecurity becomes more sophisticated. Defending effectively and ethically against evolving threats requires a combination of humans and machines,” said Ellis.

    Parkin points out that cybersecurity leaders and practitioners must be prepared to deal with a potential wave of new and sophisticated social engineering attacks.

    “User education becomes an even higher priority, as does having a coherent view of the environment and its potential vulnerabilities, which can reduce risk to a manageable level,” he says.

