
    Artists “Poison” Generative AI to Protect their Work


    Artists are turning to tools like Nightshade to “poison” generative AI models and combat threats to their work.

    This comes as generative AI becomes more prevalent, allowing people with little training to create music and art with minimal effort, disrupting the industry. Tools such as Midjourney and Stable Diffusion are used to produce images that look almost real.

    But this new technology allows artists to adjust images by a few pixels in a way that “looks fine to humans, but poisons the well for AI tools.”


    Nightshade to the rescue

    Some AI image models, such as Adobe's Firefly and those from Google and Meta, use licensed materials when generating content. For example, Adobe signed deals with Shutterstock and Getty Images after building its own AI generator trained on library images.

    However, many other successful AI image generation tools are trained on source material scraped from the open web.

    This practice has polarized opinion within the art industry and caused dissatisfaction among creators who share their artwork online.

    Technologists, on the other hand, equate AI with human art students who are simply drawing inspiration from works of art.

    Efforts to stop scraping with no-follow directives on web pages have had limited success, as they are not always honored. However, a new tool called Nightshade allows artists to alter their images so that AI models trained on them are disrupted to the artists' advantage.

    According to Tom's Guide, this leads to “unintended consequences when generating images.”

    Using Nightshade

    Nightshade subtly alters the pixels of an image in ways that disrupt AI algorithms and computer vision models, while the image appears unchanged to the human eye.
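The idea of a pixel-level edit that stays below the threshold of human perception can be sketched as follows. This is a minimal illustration, not Nightshade's actual algorithm: Nightshade optimizes its perturbation against a model's feature extractor, whereas this sketch (using a hypothetical `poison_pixels` helper) simply applies bounded random noise to show how small the per-pixel change can be.

```python
import numpy as np

def poison_pixels(image, epsilon=2, seed=0):
    """Apply a small, bounded per-channel perturbation to a uint8 image.

    Hypothetical sketch only: random noise capped at +/- epsilon per
    channel, so no pixel value shifts enough for a human to notice.
    Nightshade's real perturbation is *optimized* against a model,
    not random.
    """
    rng = np.random.default_rng(seed)
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    # Clip so the shifted values remain valid uint8 pixels.
    return np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)

# A flat grey placeholder image; any photo loaded as an (H, W, 3)
# uint8 array would work the same way.
image = np.full((4, 4, 3), 128, dtype=np.uint8)
poisoned = poison_pixels(image)
```

Comparing `image` and `poisoned` side by side, every channel value differs by at most `epsilon`, which is why the edit "looks fine to humans" while still changing the raw numbers an AI model trains on.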

    For example, if a user asks a poisoned AI image generator for a red balloon against a blue sky, it might display an image of a watermelon or an egg instead of a red balloon.

    According to Tom's Guide, this indicates that the image generator's dataset has been contaminated, as explained by TJ Thomson of RMIT University.

    “The greater the number of 'poisoned' images in the training data, the greater the confusion,” Thomson explained in The Conversation.

    “Because of the way generative AI works, the damage caused by a ‘poisoned’ image also spreads to related prompt keywords.”

    What does that mean?

    Some text-to-image generators are trained on licensed data, while those trained on content scraped from the open web face lawsuits and criticism from artists.

    “A moderate number of Nightshade attacks can destabilize the general functionality of text-to-image generative models, effectively disabling their ability to generate meaningful images,” the Nightshade team from the University of Chicago wrote in a paper on the arXiv preprint server.

    According to the paper, the tool serves as a “last line of defense” for creators against companies that fail to comply with no-scraping rules or continue to use copyrighted material to generate images.

    This could turn into an arms race, as image generators may tweak their algorithms to counteract the changes Nightshade introduces.

    According to Tom's Guide, companies fine-tune their models by having humans rank different images. Ahead of the release of Midjourney version 6, developers asked humans to “rank the most beautiful” of two images.
