Watermarking content is still not enough to distinguish genuine content from AI-generated content, as cybercriminals can bypass such security features.
The increasing adoption of AI technologies, particularly generative AI, has created fertile ground for malicious actors to spread misinformation, hate, and other harmful content.
Watermarks have emerged as one of the key tools for distinguishing AI-generated content from real-world content. However, they come with drawbacks.
Experts say that as AI grows, malicious actors will also use the technology to spread disinformation and create panic with fake news.
Watermarks can help users distinguish AI-generated content from content actually created by humans, but they are also prone to tampering.
According to The Register, some visible watermarks, such as the one applied by the DALL-E text-to-image model, are among the easiest to bypass. The article further states that malicious individuals can crop the watermark out or copy the image in a way that avoids a direct download.
Siwei Lyu, a computer science professor at the University at Buffalo in the US, said there are people with the expertise to break such safeguards.
“Watermarking technology should be taken with a grain of salt, because it is not that difficult for someone with knowledge of watermarks and AI to decipher watermarks, remove watermarks, or forge watermarks,” says Lyu.
Sam Gregory, an AI expert and executive director of a nonprofit that helps people use technology to advance human rights, thinks watermarks can still be manipulated by malicious parties.
For him, watermarking is “a triage tool to reduce harm,” he added, “not a 100% solution.”
But not everything favors bad actors. Some invisible watermarks, such as Google DeepMind's SynthID, are difficult to remove because they are “embedded directly into Imagen's system output.”
This type of watermark is not visible to the naked eye, but can be detected using special software or algorithms.
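To make the idea concrete, here is a minimal toy sketch of an invisible watermark hidden in the least significant bit (LSB) of pixel values. This is purely illustrative: SynthID's actual algorithm is unpublished, and the bit pattern and function names below are hypothetical. It shows only why such a mark is imperceptible to the eye yet trivially detectable by software.

```python
# Toy "invisible" LSB watermark: hypothetical, NOT how SynthID works.
WATERMARK_BITS = [1, 0, 1, 1, 0, 1, 0, 1]  # made-up 8-bit signature

def embed(pixels, bits):
    """Hide one bit in the least significant bit of each pixel (0-255)."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the lowest bit
    return marked

def detect(pixels, bits):
    """Return True if the expected bit pattern is present in the LSBs."""
    return [p & 1 for p in pixels[:len(bits)]] == bits

image = [200, 201, 202, 203, 204, 205, 206, 207, 208]
marked = embed(image, WATERMARK_BITS)

# Each pixel changes by at most 1 out of 255 -- invisible to the eye:
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
print(detect(marked, WATERMARK_BITS))  # -> True
```

Production schemes embed the signal far more robustly (e.g. in frequency-domain or model-output statistics), but the detection principle is the same: software looks for a pattern a human cannot see.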
But Lyu still believes it could be removed by bad actors with “technical know-how.”
“I think watermarks mainly take advantage of people not being aware of their existence. If people knew they were there, they would find a way to break them,” Lyu said.
Another type of watermark is a visible watermark added to videos, text, or images by companies like Google, Getty, and OpenAI to verify the legitimacy of the content.
The third type of watermark is cryptographic metadata. This tells you when the content was created and how it was edited.
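The value of cryptographic metadata is that any later tampering with the provenance record invalidates its signature. The sketch below illustrates this with a signed creation record; it is a simplified assumption-laden example (real provenance systems such as C2PA use certificate-based signatures rather than a shared HMAC secret, and the field names here are invented).

```python
# Toy provenance record: serialize metadata, sign it, and verify later.
# HMAC with a shared secret is used only to keep the sketch self-contained;
# real systems sign with certificates. All names here are hypothetical.
import hashlib
import hmac
import json

SECRET = b"publisher-signing-key"  # hypothetical signing key

def sign(metadata: dict) -> str:
    payload = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(metadata: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(metadata), signature)

record = {"created": "2024-01-15T09:30:00Z",
          "tool": "generative-model",
          "edits": ["crop", "color-balance"]}
sig = sign(record)

assert verify(record, sig)       # untampered record checks out
record["tool"] = "camera"        # attacker rewrites the provenance...
assert not verify(record, sig)   # ...and the signature no longer matches
```

The design point is that the metadata travels with the content, but only an unmodified record verifies against its original signature.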
“Washing out” watermarks
According to a University of Maryland study led by computer science professor Soheil Feizi, AI watermarks are not foolproof. His team tested all kinds of watermarks and “broke them all.”
“Right now there is no reliable watermark,” he says.
The professor also explained how easily malicious actors can manipulate watermarks, in a process he described as “washing them out.”
Another study co-authored by researchers from the University of California, Santa Barbara and Carnegie Mellon University found the same results.
“All visible watermarks are unreliable,” the paper reads.
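The “washing out” these studies describe can be illustrated on a toy fragile watermark: a hypothetical mark hidden in the low bits of pixel values survives an exact copy but is destroyed by ordinary lossy operations such as slight noise or re-encoding. The scheme and bit pattern below are invented for illustration, not taken from any of the cited papers.

```python
# Illustrative "wash out" attack on a fragile toy LSB watermark.
# Hypothetical scheme, not any real product's watermark.
import random

BITS = [1, 0, 1, 1, 0, 1, 0, 1]  # made-up signature

def detect(pixels):
    return [p & 1 for p in pixels[:len(BITS)]] == BITS

# A "watermarked" image whose low bits spell out the signature:
marked = [((200 + i) & ~1) | b for i, b in enumerate(BITS)]
assert detect(marked)  # the mark is present

# Attack: jitter every pixel by +/-1. The image looks identical, but every
# low bit flips, so the signature is gone.
random.seed(0)
washed = [p + random.choice([-1, 1]) for p in marked]
print(detect(washed))  # -> False
```

Robust watermarks try to survive exactly these transformations, which is why the arms race the researchers describe is about how much distortion a mark can withstand before the content itself degrades.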
A time-sensitive issue
A U.S. Senate staffer who helped draft an AI bill told FedScoop of the need to “keep in step with the bad guys,” adding that the Senate is currently in an “education and problem definition” phase.
“It's like being an Olympic athlete: you know they're testing for this drug, so you take a different one.”
With many countries going to the polls this year, the need for measures to minimize deepfakes and misinformation cannot be overlooked. Last September, Senate Majority Leader Chuck Schumer said addressing content authenticity was a time-sensitive issue, given that elections are just around the corner in many countries.
“The problem is that deepfakes actually exist, and people really believe that a candidate is saying something, even though that candidate is completely a creation of AI,” he said after the first closed-door AI Insight Forum.
Technology companies take on the challenge
Meanwhile, media and technology companies have launched the Coalition for Content Provenance and Authenticity (C2PA), an initiative that identifies whether an image's source is a camera or the AI program used to create it.
It also provides details about when, where, and how an image was created, allowing people to trace its origin. TikTok is reportedly one of the more proactive tech companies when it comes to watermarking.
“TikTok shows you the audio tracks that were used. You see the stitches that were made. You see the AI effects that were used,” Gregory said.
Other major technology companies, including Alphabet, Amazon, Meta, and OpenAI, also pledged last year to develop watermarking technology to combat misinformation.