A new report warns that the proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos.
I think the biggest worry for me at this point is what the AI was trained on in order to depict these images. It's not victimless if it needs images of real child abuse victims to train on.
Edit: really fucking weird that I'm getting downvoted for being against AI training on child porn. I'm willing to go down with that ship.
It knows what naked people look like, and it knows what children look like. It doesn't need naked children to fill in those gaps.
Also, these models are trained on images scraped from the clear net. Somebody would have had to manually add CSAM to the training data, and that would be easily traced back to them. The likelihood of actual CSAM being included in any mainstream AI's training material is slim to none.
There is likely some CSAM in most of these models, since filtering it out of a several-billion-image dataset is nearly impossible even with automated methods. That material probably has little to no effect on outputs, though, because it's scarce and was most likely tagged incorrectly.
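For context on what "automated methods" means here: dataset curators typically hash every image and compare it against blocklists of known material maintained by clearinghouses like NCMEC and the IWF. Below is a minimal Python sketch of that idea, assuming a hypothetical `blocklist.txt` of known-bad perceptual hashes; it illustrates the general technique, not any real pipeline.

```python
# Minimal sketch of hash-blocklist filtering. blocklist.txt is a
# hypothetical file of known-bad perceptual hashes (real vetted hash
# lists are distributed privately, not published).
from PIL import Image
import imagehash

MAX_DISTANCE = 4  # Hamming-distance threshold for near-duplicate matches

def load_blocklist(path: str) -> list[imagehash.ImageHash]:
    with open(path) as f:
        return [imagehash.hex_to_hash(line.strip()) for line in f if line.strip()]

def is_blocked(image_path: str, blocklist: list[imagehash.ImageHash]) -> bool:
    # phash is robust to resizing and re-encoding, unlike a cryptographic hash
    h = imagehash.phash(Image.open(image_path))
    # Subtracting two ImageHash objects gives their Hamming distance
    return any(h - bad <= MAX_DISTANCE for bad in blocklist)

# Usage sketch: filter a dataset before training
# blocklist = load_blocklist("blocklist.txt")
# clean = [p for p in image_paths if not is_blocked(p, blocklist)]
```

The fundamental limit is that this only catches images already on a list; novel material sails through, and at billions of images even a tiny miss rate leaves something behind.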
The bigger concern is users downstream fine-tuning models on their own datasets with this material. This has been happening for a while, though I won't point fingers (Japan).
There's not a whole lot that can be done about it, but I also don't think anything needs to be done. It's already illegal, and it's already removed from most platforms semi-automatically. Having more of it won't change that.