
    Deepfakes can cause geopolitical rifts. States should fund detection of manipulated videos


    A notable geopolitical risk of AI is its ability to create deepfakes (audio and visual content that depicts real individuals in fictional situations). Imagine how much damage weaponized deepfakes could do to relations between nations. Countries are taking note of this concern. In the G20 Delhi Declaration, member states committed to an approach that fosters innovation in the regulation of artificial intelligence while taking into account the risks posed by the technology. The G20’s July 2023 meeting, “Crime and Security in the Era of NFTs, AI, and the Metaverse”, specifically pointed to the use of deepfakes by malicious actors as a growing concern.

    According to Europol, the European Union Agency for Law Enforcement Cooperation, deepfakes have the potential to cause widespread institutional disruption. For example, the BBC reported on deepfake videos of Boris Johnson and Jeremy Corbyn in which each appeared to endorse the other ahead of an election. If a significant portion of the UK population had believed such a video, the election result could have been compromised. The Europol report also highlighted how deepfakes pose a threat to financial markets: they can be used to send a company’s stock price plummeting by portraying key management in a negative light.

    Deepfakes can also have a major social impact. Researchers have shown that one of the most sinister effects of deepfakes is their ability to leave a mark on people’s minds even after they have been disproven, raising concerns about trust and information integrity in society.


    Also read: Why Manoj Tiwari’s deepfake deeply concerns India


    Detection and provenance

    There are two possible ways to combat deepfakes. The first is detection, where an algorithm determines the authenticity of an image or video. However, this method has its limitations. Common detection methods look for visual discrepancies that indicate forgery. Yet, according to some studies, when the image or video undergoes significant compression or distortion, these indicators can disappear and detectors can produce a large number of false positives.
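    To make the idea concrete, here is a minimal sketch of a learned detector, not any specific product’s method: a small convolutional network that scores a face crop as real or fake, assuming PyTorch and a hypothetical labelled training set.

```python
# Minimal sketch of a learned deepfake detector: a small CNN that outputs
# the probability that a face crop is synthetic. Assumes PyTorch; in
# practice this would be trained on a large labelled dataset of real and
# fake face crops.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 1)  # single logit: fake vs real

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = DeepfakeDetector()
crop = torch.rand(1, 3, 224, 224)            # one normalised face crop
p_fake = torch.sigmoid(model(crop)).item()   # probability the crop is fake
print(f"P(fake) = {p_fake:.2f}")
```

    The failure mode described above follows from this design: classifiers like this lean on subtle, high-frequency artifacts, and heavy compression strips exactly those cues from both real and fake footage.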

    Intel’s FakeCatcher, a deepfake detector, is a good example. It identifies deepfakes by studying facial blood flow patterns: as the heart pumps blood, the veins change color. Intel’s detector collects these blood flow signals from across a person’s face in a given image or video and synthesizes them to assess its veracity. A BBC report found that Intel’s detector was better at identifying lip-synced deepfakes, in which the mouth and voice were altered. However, the more distorted the resolution of a video, the more likely the detector was to flag it as fake, even when it was genuine.
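    As a rough illustration of the underlying principle (this is not Intel’s proprietary pipeline), a blood-flow check can be sketched as follows: average a face region’s green channel across frames, then test for a spectral peak at plausible heart-rate frequencies. The sketch assumes OpenCV and a hypothetical video file containing a roughly centred face.

```python
# Simplified illustration of the blood-flow idea behind detectors like
# FakeCatcher (not Intel's actual method): track the average green-channel
# intensity of a face region over time and look for a heartbeat-frequency
# peak. Real faces tend to show a periodic pulse signal; synthesised faces
# often do not.
import cv2
import numpy as np

cap = cv2.VideoCapture("clip.mp4")            # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    roi = frame[h // 4 : h // 2, w // 3 : 2 * w // 3]  # crude forehead/cheek region
    signal.append(roi[:, :, 1].mean())        # green channel carries most pulse info
cap.release()

sig = np.asarray(signal)
sig = sig - sig.mean()                        # remove the DC offset
freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
power = np.abs(np.fft.rfft(sig)) ** 2
band = (freqs > 0.7) & (freqs < 4.0)          # plausible heart rates: 42-240 bpm
ratio = power[band].max() / power[1:].mean()  # strength of the pulse-band peak
print("pulse-like signal" if ratio > 10 else "weak or no pulse signal")
```

    The same sketch also hints at the failure mode the BBC noted: compression and low resolution wash out these faint color fluctuations, so genuine but degraded footage loses its pulse signal and looks fake.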

    What further complicates the landscape is the very nature of deepfake creation. At the heart of this technology is the generative adversarial network (GAN), which stages a continuous contest between a generator that creates fakes and a discriminator that tries to identify them. This iterative process of creation and detection means that as soon as one tell-tale flaw is identified, the system adapts, making subsequent detection even more difficult.
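    A minimal sketch of that adversarial loop, assuming PyTorch and using toy vectors in place of images, shows why detection is a moving target: every improvement in the discriminator is immediately used to train a better generator.

```python
# Minimal GAN training loop sketch (PyTorch assumed), showing the
# generator/discriminator contest described above. Toy 1-D data stands in
# for images; the adversarial structure is the same.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))   # generator
D = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))    # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 8) + 3.0          # stand-in for real samples
    fake = G(torch.randn(64, 16))

    # The discriminator learns to separate real from fake...
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # ...while the generator learns to fool the updated discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```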

    The second way to identify deepfakes is through provenance, which embeds metadata into media that tracks attributes such as the author, creation date, and editing history. Organizations like the Coalition for Content Provenance and Authenticity (C2PA) have developed an open technical standard for proving and authenticating the origin of media. Research indicates that this method can reduce trust in deceptive content, but it is not foolproof: studies have found that users can become distrustful of even authentic media when provenance data is incomplete. Moreover, without global standards, its effectiveness is limited.
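    In essence, provenance binds a signed manifest to the media bytes. The sketch below illustrates the idea with a keyed hash from the Python standard library; it is not the C2PA manifest format, which relies on public-key signatures and certificate chains.

```python
# Simplified illustration of provenance metadata (not the actual C2PA
# format): bind the author and edit history to the media bytes with a
# keyed signature, so any later tampering is detectable.
import hashlib, hmac, json

SECRET_KEY = b"hypothetical-signing-key"  # a real system would use asymmetric keys

def sign_manifest(media: bytes, author: str, history: list) -> dict:
    manifest = {
        "author": author,
        "edits": history,
        "content_hash": hashlib.sha256(media).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(media: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(sig, hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest())
    ok_hash = claimed["content_hash"] == hashlib.sha256(media).hexdigest()
    return ok_sig and ok_hash

video = b"...media bytes..."
m = sign_manifest(video, "News Desk", ["captured", "cropped"])
print(verify(video, m))          # True: content matches the signed manifest
print(verify(video + b"x", m))   # False: any edit breaks the binding
```

    Verification fails the moment either the content or its claimed history changes, which is exactly what makes provenance useful; equally, when the manifest is missing or incomplete, viewers are left with no signal, which is the source of the distrust the research describes.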


    Also read: Global policymakers don’t understand AI well enough to regulate it. Tech companies need to step up now


    Protect our shared reality

    So where does the G20 stand in terms of policy options? Due to the transnational nature of content transmission, simply banning deepfakes will not be effective. The U.S. National Security Agency suggests that, to combat deepfakes, organizations deploy a combination of detection and provenance techniques. Despite its potential for error, detection plays an important role in forensic analysis when no provenance information exists. However, detection is costly because of the large amounts of training data and computing it requires. For example, fees for Reality Defender, a deepfake detection tool, can run from thousands of dollars to millions, covering the cost of “expensive graphics processing chips and cloud computing power”. These figures raise the question of who will pay for detection.

    G20 countries should look to the financing of the energy transition for inspiration. To combat climate change effectively, many countries fund the shift of industrial energy from carbon-intensive to low-carbon sources and technologies. The existential impact that deepfakes could have on our world is perhaps comparable to the threat posed by climate change. It therefore stands to reason that countries should step in and fund the deployment and maintenance of deepfake detection, since this pressing threat requires such proactive intervention. Funding and supporting the development of deepfake detection technology is a critical step in protecting digital truth. It is an investment not just in technology, but in the very foundations of our shared reality.

    The author is a consultant on emerging technologies at Koan Advisory. Views are personal.

    (Edited by Therese Sudeep)
