A video posted last week on Chinese social media platform Weibo shows American pop star Taylor Swift speaking fluent Mandarin. However, the 33-year-old singer wasn’t the one in the clip. It was a deepfake video generated by an AI tool from Chinese startup HeyGen.
Since it was first shared on October 21, the Swift deepfake has received over 6 million views. It has also sparked discussion about a looming pitfall: as AI becomes more advanced, it will become harder to distinguish real identities and content from fake ones.
Deepfakes (realistic but fabricated images, videos, or voice recordings used to impersonate another person) are being deployed to create fake digital identities, which cybercriminals can use to commit fraud. The resulting images and videos look and sound exactly like the targeted person.
For example, in May a Chinese businessman lost 4.3 million yuan ($612,000) after a scammer used face-swapping AI to impersonate his friend. Although no money was lost over the Taylor Swift deepfake, here are some things to keep in mind to avoid being scammed on social media.
AI and social engineering: the silent threat
Uncovering the growing risks of AI-driven social engineering. Learn how to protect yourself. #AI #deepfake #misinformation #uthkithk pic.twitter.com/FQquIWXuJa
— You think, I think (@uthkithk) October 27, 2023
Check AI celebrity endorsements
Fraudsters typically use AI deepfakes of trusted individuals to lure victims. In recent months, countless fake AI celebrities have appeared to trick people with bogus endorsements. Fake versions of icons like Elon Musk and Beyoncé have been used to promote sham brands.
Many of these fake ads are apparently showing up high in Google’s search results, likely because the company isn’t doing a good job of filtering out fraudulent content.
But in the age of AI-generated fake content, it’s important to approach videos that seem too good to be true with a critical eye. Extraordinary claims require extraordinary evidence. If you find a viral video that seems sensational, take the time to check its authenticity and source.
“If you need advice about a product or service, read reviews or find a knowledgeable expert who can vouch for it,” says consumer technology expert and radio host Kim Komando.
“Another smart step is to Google the product and the actor in the ad along with the word ‘reviews,’” she added in an article published by the New York Post. “If someone is getting paid to endorse a product, they won’t just be running one random ad on social media.”
Also read: Taylor Swift, Emma Watson and others targeted in AI porn surge
Pay attention to detail
Deepfake technology is especially frightening because it looks so real. AI tools like Stable Diffusion can manipulate voices and mouth movements, making it easier for people to believe that video and audio recordings are genuine.
Remember the AI-generated Drake and The Weeknd song from Ghostwriter that fooled millions of people, and even music streaming services, into thinking it was a real new release? Co-founder and CEO Alex Kim offers the following suggestions for identifying deepfakes:
“Be careful if there are unusual inconsistencies in the video you’re watching. Content creators who use deepfakes are typically trying to save time, so they don’t bother fine-tuning the details,” Kim told MetaNews.
“This means mismatched facial expressions, unnatural movements, strange artifacts, audio mismatches, or poor lip-syncing are likely to be present in deepfake videos,” Kim said, adding:
“Eyes are the hardest thing for deepfakes to get right, so pay special attention to the eyes.”
The case of the Chinese businessman is a reminder that deepfake technology is a powerful tool that can be used for good or ill. Regulators have been paying closer attention to the dangers of AI since OpenAI launched its viral chatbot ChatGPT in November 2022, sparking a global AI race.
As MetaNews previously reported, experts have proposed developing new technologies that can detect and prevent the use of fake IDs. This could include the use of biometric data, such as facial recognition or fingerprint scans, to verify a user’s identity online.
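To make the idea concrete, here is a minimal sketch of a biometric face-matching check, assuming the open-source face_recognition Python library and placeholder file names; real verification systems would layer on liveness detection, document checks, and secure storage.

```python
# Minimal sketch of biometric face matching (not any platform's actual system).
# Assumes: pip install face_recognition; "id_photo.jpg" and "login_selfie.jpg"
# are placeholder file names.
import face_recognition

def same_person(reference_path: str, selfie_path: str, tolerance: float = 0.5) -> bool:
    reference = face_recognition.load_image_file(reference_path)
    selfie = face_recognition.load_image_file(selfie_path)

    ref_encodings = face_recognition.face_encodings(reference)
    new_encodings = face_recognition.face_encodings(selfie)
    if not ref_encodings or not new_encodings:
        return False  # no face detected in one of the images

    # Distance between 128-dimensional face embeddings; smaller means more similar.
    distance = face_recognition.face_distance([ref_encodings[0]], new_encodings[0])[0]
    return distance <= tolerance

if __name__ == "__main__":
    print(same_person("id_photo.jpg", "login_selfie.jpg"))
```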
Are you ready to make your selfies untouchable? @MITPhotoGuard is your digital bodyguard against AI scammers.
Small invisible adjustments keep your photos safe from deepfakes and nasty edits. This is the promising gold standard in online safety.
☞ Worried about the safety of your photos? pic.twitter.com/blCBYw8REq
— M (@emilios_eth) October 27, 2023
Check the background for clues
Some apps use watermarks to identify AI-generated content, but others are less obvious. Alex Kim said users need to scan the background of images and videos for clues that point to AI deepfake material.
“If the background is moving unnaturally or the lighting doesn’t match the shadows in the foreground, it’s probably a deepfake,” Kim says. “Details such as material texture or lack thereof are another sign.”
“Look for pixelation or blurring in areas where there shouldn’t be any, especially around human subjects. Deepfakes can’t reproduce natural details such as hairlines, ears, noses, and facial features, or at least struggle to reproduce them convincingly.”
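As a rough illustration of that tip (not a production deepfake detector), the sketch below compares the sharpness of detected face regions against the rest of the frame using OpenCV’s Laplacian variance, a common blur heuristic; opencv-python and the placeholder file "frame.jpg" are assumptions.

```python
# Rough blur heuristic, not a real deepfake detector: a face region that is much
# blurrier (or much sharper) than its surroundings is worth a closer look.
# Assumes: pip install opencv-python; "frame.jpg" is a placeholder image file.
import cv2

def face_sharpness_report(image_path: str) -> None:
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Variance of the Laplacian is a standard proxy for sharpness.
    frame_sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        face_sharpness = cv2.Laplacian(gray[y:y + h, x:x + w], cv2.CV_64F).var()
        print(f"face at ({x},{y}): sharpness {face_sharpness:.1f} vs frame {frame_sharpness:.1f}")

if __name__ == "__main__":
    face_sharpness_report("frame.jpg")
```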
Because AI technology remains free and easily accessible, it is being abused by bad actors in a variety of ways. Images of famous female celebrities such as Taylor Swift and Emma Watson have been manipulated with AI to create deepfake pornographic content.
A new AI tool from the Massachusetts Institute of Technology (MIT) in the US promises to curb deepfakes. People can use PhotoGuard to make “small invisible adjustments” that keep [their] photos safe from deepfakes and malicious edits, an approach the tweet above touts as “the promising gold standard in online safety.”
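PhotoGuard itself computes carefully optimized adversarial perturbations aimed at specific AI editing models; the toy sketch below only illustrates the basic notion of an “invisible adjustment” (pixel changes too small to notice), using NumPy and Pillow with placeholder file names.

```python
# Toy illustration only: random pixel nudges bounded to +/- 2 intensity levels.
# PhotoGuard optimizes its perturbation against AI editing models; this random
# version protects nothing by itself. Assumes numpy, Pillow, placeholder files.
import numpy as np
from PIL import Image

def add_invisible_perturbation(in_path: str, out_path: str, epsilon: int = 2) -> None:
    pixels = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=pixels.shape, dtype=np.int16)
    perturbed = np.clip(pixels + noise, 0, 255).astype(np.uint8)
    Image.fromarray(perturbed).save(out_path)  # visually identical to the original

if __name__ == "__main__":
    add_invisible_perturbation("selfie.jpg", "selfie_protected.png")
```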
Alex Kim told MetaNews that the most obvious and common way to identify fake AI videos on social media is to consider the channels on which they are hosted.
“You might want to look at how much a channel has posted recently and whether there has been a big spike or surge in content creation,” Kim says. “If you see a flood of posted videos that look off, low quality, or weird, that’s a pretty sure sign the creator is using deepfakes.”
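As a back-of-the-envelope version of that check, the sketch below counts a channel’s posts per day and flags days far above its typical volume; the timestamp list and the 5x threshold are illustrative assumptions, not figures from Kim.

```python
# Flags days on which a channel posted far more than its typical daily volume.
# The sample timestamps and the 5x threshold are illustrative assumptions.
from collections import Counter
from datetime import datetime
from statistics import median

def flag_posting_spikes(post_times: list[datetime], factor: float = 5.0) -> list[str]:
    per_day = Counter(t.date().isoformat() for t in post_times)
    typical = max(median(per_day.values()), 1)
    return [day for day, count in per_day.items() if count > factor * typical]

if __name__ == "__main__":
    sample = [datetime(2023, 10, d, h) for d in (1, 2, 3) for h in (9, 15)]
    sample += [datetime(2023, 10, 27, h) for h in range(24)]  # sudden 24-post day
    print(flag_posting_spikes(sample))  # ['2023-10-27']
```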