
    North Korean Cyber Threat Escalates with Generative AI


    North Korean hackers are integrating AI into cyberattacks, leveraging generative AI for phishing and social engineering, creating new challenges.

    In particular, these state-sponsored hackers are using artificial intelligence (AI) as part of their strategy to steal technology and funding for the country's clandestine nuclear weapons program.

    The hackers are known for operations such as the Bangladesh Central Bank heist and the 2017 WannaCry ransomware attack on the UK National Health Service. They have previously targeted employees of international defense, cybersecurity, and cryptocurrency companies.

    OpenAI and Microsoft reveal how threat actors are using AI

    OpenAI and Microsoft have confirmed that their AI services are being used for malicious cyber activity by hackers in North Korea, China, Russia, and Iran. New challenges surfaced after South Korea reported that North Korean hackers had used generative AI to identify and target security officials.

    North Korean hackers once had a limited ability to converse in English or Korean, but with generative AI they can now create authentic-looking profiles on platforms like LinkedIn, making their phishing and social engineering operations more effective.

    Microsoft said it has worked with OpenAI to identify and neutralize a number of threat actors that used or attempted to exploit the AI technology it developed.

    In a blog post, Microsoft said the techniques were in their infancy and not particularly novel or unique, but argued that it was important to expose them publicly as U.S. adversaries use large language models to expand their ability to breach networks and conduct influence operations.

    Defensive cybersecurity companies have long used machine learning, primarily to identify anomalous network activity. But offensive hackers and criminals are also using it, and the cat-and-mouse game has intensified with the introduction of large language models, led by OpenAI's ChatGPT.

    Using generative AI

    North Korean hackers can pose as recruiters to trick targets into completing technical exercises, then use generative AI to help install spyware. They operate on platforms such as LinkedIn, Facebook, WhatsApp, and Discord.

    ChatGPT and other AI services could help North Korean hackers create more sophisticated malware and other dangerous software. Although safeguards exist to prevent abuse, people have developed ways to circumvent them. North Korea has used proceeds from illicit cyber operations to finance its nuclear and ballistic missile programs and has invested in strengthening its cyber capabilities. The country also has access to Chinese artificial intelligence services.

    North Korea's AI program

    The National Intelligence Service warned in 2024 that North Korea's AI capabilities could lead to more serious and focused attacks.

    Research shows that North Korea has a well-developed AI ecosystem, with both government and private entities possessing advanced machine learning skills.

    During the coronavirus pandemic, North Korea used AI tools to monitor mask compliance and track symptom detection. The government has also applied AI to pattern optimization for nuclear safety and to wargaming simulations.

    A private company in North Korea claims to have incorporated deep neural network technology into a security surveillance system with intelligent IP cameras, enabling fingerprint, voice, facial, and text recognition on mobile phones.

    Study author Kim Hyuk said North Korea's AI/ML development strategy spans the government, academic, and commercial sectors, demonstrating a comprehensive approach to building these capabilities across industries.
