  • This is the best summary I could come up with:


    The “worst nightmares” about artificial intelligence-generated child sexual abuse images are coming true and threaten to overwhelm the internet, the Internet Watch Foundation (IWF), a safety watchdog, has warned.

    Examples of child sexual abuse material (CSAM) included using AI tools to “nudify” pictures of clothed children found online.

    Its latest findings were based on a month-long investigation into a child abuse forum on the dark web, a section of the internet that can only be accessed with a specialist browser.

    The IWF said the vast majority of the illegal material it had found was in breach of the Protection of Children Act, with more than one in five of those images classified as category A, the most serious kind of content, which can depict rape and sexual torture.

    Stability AI, the UK company behind Stable Diffusion, has said it “prohibits any misuse for illegal or immoral purposes across our platforms, and our policies are clear that this includes CSAM”.

    The government has said AI-generated CSAM will be covered by the Online Safety Bill, due to become law imminently, and that social media companies would be required to prevent it from appearing on their platforms.


    The original article contains 561 words, the summary contains 190 words. Saved 66%. I'm a bot and I'm open source!

  • There is real potential for AI-generated CSAM to proliferate. While the big AI generators are centralized and kept clear of most bad stuff, unrestricted versions will eventually become widespread.

    We already have deepfake porn of popular actresses, which I think is harmful in itself. There have also been sexually explicit deepfakes made of preteen and young teenage girls in Spain, and I think that's the first of many similar incidents to come.

    I can't think of a way to prevent this from happening without destroying major potential in AI.
