IWF Demands Action On Pedophiles Using Generative AI

    The Internet Watch Foundation warns that generative AI could enable “massive generation of images” that could “overwhelm those fighting against online child sexual abuse.”

    The organisation is alarmed by the rapid advances in generative AI over the past year and predicts that AI-generated child sexual abuse material (CSAM) will become even more graphic over time.

    Most alarming crime

    The Internet Watch Foundation says generative AI has great potential to improve our lives, but warns that the technology can easily be repurposed for malicious ends.

    IWF chief executive Susie Hargreaves told the Guardian on Wednesday that the agency’s “worst nightmare has become reality.”

    “Earlier this year, we warned that AI images could soon be indistinguishable from real photographs of children suffering sexual abuse, and that these images could begin to circulate in even greater quantities. We are now beyond that point,” Ms Hargreaves said.

    She added: “Horrifyingly, we are seeing criminals deliberately training AI with images of real victims who have already been abused. Children who have been raped in the past. They’re being put into new scenarios because someone, somewhere, wants to see them.”

    The scale of this material is already causing serious problems for those fighting CSAM.

    In a single month, 20,254 AI-generated images were posted to one dark web CSAM forum. Of these, IWF analysts assessed 2,562 as criminal images created using generative AI.

    More than half of these depicted children under the age of 10, and 564 were classified as Category A, the most severe category of child abuse imagery.

    CSAM Photos of Celebrities

    Generative AI is creating a new category of CSAM. IWF findings show that celebrities are being de-aged and transformed into children by AI tools.

    These de-aged celebrities are then placed into abusive scenarios to satisfy online pedophiles.

    The children of celebrities are also targeted, with young people being “nudified” for users on darknet forums.

    The IWF says it is becoming increasingly difficult to distinguish these images from genuine CSAM.

    “The most convincing AI CSAM is visually indistinguishable from real CSAM, even to trained IWF analysts,” the report states, adding that such images pose additional challenges for law enforcement.

    Government recommendations

    The UK-based IWF is calling on technology companies around the world to make clear that AI-generated CSAM violates their terms of service. It would also like to see better training for law enforcement so that officers can more easily identify these types of images.

    The IWF is also calling on the UK government to make AI CSAM a main topic of discussion at the AI Safety Summit to be held at Bletchley Park next month.

    Britain hopes to attract world leaders as well as key figures from the business world to the event. So far, Italian Prime Minister Giorgia Meloni is the only confirmed G7 leader scheduled to attend.

