
    Deepfake CFO Dupes Employee Out of $25.6M in Hong Kong AI Scam


    A finance worker at a Hong Kong-based multinational company reportedly lost HK$200 million ($25.6 million) in company funds after fraudsters used deepfake AI technology to impersonate the company's chief financial officer during a video conference.

    Police said the employee received a message last month from someone claiming to be the company's London-based CFO. The person, who turned out to be a scammer, asked the employee to join an “encrypted” video call with four to six other employees.

    The employee was hesitant at first but was convinced after the video call because the attendees looked and sounded like colleagues he knew from work, Hong Kong newspaper The Standard reported.

    Also read: Man loses $600,000 to scammers using face-swapping AI

    Deceiving a finance professional

    The fake CFO wasted no time, quickly making urgent appeals for money transfers. Believing that everyone else on the video call was genuine, the victim followed the instructions and ended up making 15 transfers to five local bank accounts.

    The worker remitted a total of HK$200 million, approximately US$25.6 million at the time. When the employee checked in with headquarters a week later, the fraud came to light and he reported the matter to the police.

    “(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” said Hong Kong Police Senior Superintendent Baron Chan Shun-ting. Police did not release the name of the company or the identities of the employees involved.

    Chan said the video feeds were generated using AI from recordings of genuine online meetings held in the past. To add depth and credibility to the scam, the fraudsters also used WhatsApp messages, emails, and one-on-one video calls with staff in Hong Kong.

    “We believe the scammers downloaded the videos in advance and used artificial intelligence to add fake audio for use in the video conference,” Chan said, adding in a separate report:

    “They used deepfake technology to imitate the target’s voice reading the script.”

    Police say cases of financial fraud using AI deepfake technology are on the rise in Hong Kong. Between July and September 2023, eight stolen local ID cards were used to make 90 loan applications and register 54 bank accounts, Chan said, according to a report by CNN.

    Hong Kong police said the fraudsters had used AI deepfakes at least 20 times to fool facial recognition software by “imitating the person appearing on the ID card.” Police have arrested six people in connection with such scams.


    AI deepfakes are worrying world leaders

    Experts say that as AI becomes more sophisticated, it will become increasingly difficult to distinguish real identities from fake ones. The technology can compromise the security and privacy of people's digital identities.

    As the Hong Kong incident highlights, deepfakes can be used to create realistic but fake images, videos, and voices that impersonate another person.

    Since OpenAI announced its viral chatbot ChatGPT in November 2022, regulators around the world have started paying more attention to the dangers of AI.

    In the United States, senators late last month introduced a bipartisan bill that would allow victims of non-consensual AI-generated pornographic deepfakes to sue the creators of the videos.

    The bill came after an AI-generated sexually explicit image of Taylor Swift went viral on social media sites such as X. Tens of millions of people viewed the image before the platform formerly known as Twitter blocked searches for the pop singer.

    In China, the country's Cyberspace Administration last year issued new regulations banning the use of AI-generated content to spread “fake news.” The regulations also require deepfake technology providers to clearly label their products as synthetic.

    In India, IT Minister Rajeev Chandrasekhar recently warned that social media companies will be held liable for AI deepfakes that users post on their platforms. The warning came after an AI-generated semi-nude video of Indian actor Rashmika Mandanna appeared online in November.
