The decision could set a precedent for future monitoring of people convicted of indecent image offences
A sex offender convicted of making more than 1,000 indecent images of children has been banned from using any “AI creating tools” for the next five years in the first known case of its kind.
Anthony Dover, 48, was ordered by a UK court “not to use, visit or access” artificial intelligence generation tools without the prior permission of police as a condition of a sexual harm prevention order imposed in February.
The ban prohibits him from using tools such as text-to-image generators, which can make lifelike pictures based on a written command, and “nudifying” websites used to make explicit “deepfakes”.
Dover, who was given a community order and £200 fine, has also been explicitly ordered not to use Stable Diffusion software, which has reportedly been exploited by paedophiles to create hyper-realistic child sexual abuse material, according to records from a sentencing hearing at Poole magistrates court.
UK legislators have a long history of taking actions informed not by science or reason but by popular, often hysterical, opinion.
This case is yet another attempt at tightening screws where they shouldn't be.
The AI imagery was produced by Stable Diffusion, a model that, for all we know, did not take real CSAM as training input and caused no harm to actual children. At the same time, such images matter for discouraging the consumption of real CSAM, in which very real children are traumatized.
By banning the production of AI imagery with safe models, legislators leave pedophiles no legal way to get something by harmless means, directing many toward the harmful route, since both are equally illegal, while also prosecuting those who did no harm.
I thought pedophiles looking at CSAM were more likely to attack a child, not less. They are actively fantasizing about it, and that can escalate.
I am basing this belief on what I remember of discussions regarding that "ask a rapist" reddit megathread. Apparently psychologists thought that was horrifying.
The bias with this approach is that it highlights those who did offend while telling us nothing about those who didn't. The same selection bias is repeated throughout the research as well.
It's very likely that a lot of child abusers did watch CSAM (after all, if you see no issue in child abuse, there's no issue for you in the creation of such imagery), but how many CSAM viewers end up being abusers and is there an elevated risk? That is the question.
I guess if we'd make an "ask a pedophile" thread instead of "ask a rapist", we could get some insights. Pedophiles, catch the idea!
That's up to everyone. Besides, most pedophiles do have a sexual interest in adults as well, and current methods reduce that drive too.
Chemical castration in this context increases misery and makes building healthy adult relationships harder. Most pedophiles do not opt for that, for all I know.
Current therapeutic methods do include suppressing sex drive in case the client struggles with impulse control. Otherwise, it is not offered, but can be given on request.
If you have a folder of AI-generated CP and slip in a couple of pictures of actual CP, it's going to muddle the case, as the offender could claim all of them are simply AI-generated. Real harm could go unnoticed if those two were treated differently.
Additionally, not every offender will stop at AI-generated images, and if their curiosity grows strong enough they could go on to want to experience "the real thing".
It doesn't matter if you believe it; for those who lived through D.A.R.E. and the war on drugs, that argument was common and on plenty of people's lips. It's a stupid argument, but I think that's the point OP is trying to make.
Then why is that person repeating a stupid argument at me? Those aren't comparable at all.
A better comparison would be, idk, CBD weed with no THC being legal and that being the "gateway" to normal weed. Or buying a knock-off product and wanting to try the original. Or looking at AI-generated photos of people eating spaghetti and wanting to see what it actually looks like.
I think the solution here is not banning AI materials outright but to make them identifiable - even by means of digital signatures if you want.
For example, Stable Diffusion could insert a particular piece of metadata into the picture containing the signature and proving the image is AI-generated, etc.
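A rough sketch of what that metadata tagging could look like with Pillow; the chunk names here are made up for illustration, not any real standard:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical example: tag a freshly generated picture as AI-made by
# writing text chunks into the PNG before saving it.
img = Image.open("generated.png")

meta = PngInfo()
meta.add_text("generator", "stable-diffusion")  # field names are invented
meta.add_text("provenance", "ai-generated")

img.save("generated_tagged.png", pnginfo=meta)

# Anyone can read the tag back:
print(Image.open("generated_tagged.png").text)
# {'generator': 'stable-diffusion', 'provenance': 'ai-generated'}
```

Some Stable Diffusion front ends already write their generation parameters into a PNG text chunk in roughly this way.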
Without AI materials, said curiosity may lead people straight to the "real thing", and every darknet or even Telegram dweller will tell you it's frighteningly easy to find it even if you never intended to. With AI materials, people can have a chance to stop there.
It could be baked into the pixels, or even better, submitted for identification to a system similar to what Apple uses to detect CSAM, but as a "this one is alright" ID (and only in the police's hands, not on-device or anything).
But even then, if every pixel gets marked as "created by AI", it would still be trivial to take real CSAM and run it through an image-to-image generator with the denoising strength turned down to 0.05, and suddenly you have real CSAM that has been marked as "legal" since it is technically AI-generated.
Also, keep in mind that there are several open source projects out there where anyone who knows what they are doing could just strip out any protections that might be put in place.
Yeah, but the point is you can't easily add it to any picture you want (if it's implemented well), thus providing a way to prove that the pictures were created using AI and no harm was done to children in their creation. It would be a valid solution to the "easy to hide actual CSAM among AI-generated pictures" problem.
Going to need you to elaborate on this. EXIF data is just bytes in a file, like any of the other bytes in the file. It can be changed, and often is changed without the user's consent. Are you proposing we create a new type of hardware, something akin to the Secure Enclave, and then mass-produce it and add it to every consumer CPU to ensure specific types of EXIF data aren't tampered with?
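To make the "just bytes" point concrete, here is a minimal sketch (file names are hypothetical, continuing the tagging example above): re-saving only the pixels silently drops every EXIF field and PNG text chunk, no special tooling needed.

```python
from PIL import Image

# Copy only the pixel data into a brand-new image; none of the original
# metadata (EXIF, PNG text chunks) comes along for the ride.
img = Image.open("generated_tagged.png")
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("stripped.png")

print(Image.open("stripped.png").text)  # {} -- the provenance tag is gone
```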
I disagree that it should be allowed, but I think their proposal would be something like attaching an identifier to the model, the random seed, the "temperature," and any other relevant parameters that allow exact reproduction of the image without having access to anything but the model. Then you can prove it came from the model.
Here's a thought experiment, though: what would prevent someone from taking a real image and a model, then working with them until they can reproduce a very close approximation of the real image from text and parameter input? These models aren't like a hash function; they can be run in reverse to some extent. Backpropagation is how they are trained, after all.
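If I understand the proposal, it amounts to publishing a provenance record alongside the image. A sketch under that assumption (every field name here is invented for illustration):

```python
import hashlib
import json

# Everything needed to regenerate the image exactly, given the same model:
record = {
    "model_sha256": "<sha256 of the model weights>",
    "prompt": "a bowl of spaghetti on a kitchen table",
    "seed": 123456789,
    "steps": 30,
    "cfg_scale": 7.5,
    "sampler": "ddim",
}

# A stable fingerprint of the record, e.g. to embed in the file or to
# register with whoever is supposed to check these claims.
canonical = json.dumps(record, sort_keys=True).encode()
print(hashlib.sha256(canonical).hexdigest())
```

Of course, as the thought experiment above points out, being able to reproduce an image from parameters doesn't by itself prove the pixels never existed outside the model.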
I was thinking of an approach based on cryptographic signatures. If all images that come from a certain AI model are signed with a digital certificate, you can tamper with metadata all you want; you're not gonna be able to produce the correct signature to add to an image unless you have access to the certificate's private key. This technology has been around for ages, is used in every web browser, and would be pretty simple to implement.
The only weak point with this approach would be that it relies on the private key not being publicly accessible, which makes this a lot harder or maybe even impossible to implement for open source models that anyone can run on their own hardware. But then again, at least for what we're talking about here, the goal wouldn't need to be a system covering every model, just one that makes at least a couple models safe to use for this specific purpose.
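A minimal sketch of that scheme using the Python cryptography package and Ed25519 keys, assuming the signing happens server-side where the private key can actually stay private (file names are hypothetical):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The model operator holds the private key; only the public key is published.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the exact bytes the model emitted.
image_bytes = open("generated_tagged.png", "rb").read()
signature = private_key.sign(image_bytes)

# Anyone with the public key can later check the claim:
try:
    public_key.verify(signature, image_bytes)
    print("valid: these bytes are exactly what the model produced")
except InvalidSignature:
    print("invalid: altered, or never came from this model")
```

Note that the signature covers the exact bytes, so any re-encode, crop, or metadata strip invalidates it; that is the point, but it also means the signature has to travel with the original file.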
I guess the more practical question is whether this would be helpful for any other use case. Because if not, I highly doubt it's gonna be implemented. Nobody is gonna want the PR nightmare of building a feature with no other purpose than to help pedophiles generate stuff to get off to "safely", no matter how well intentioned.
> By banning the production of AI imagery with safe models, legislators leave pedophiles no legal way to get something by harmless means.
Paedophiles are not entitled to, nor should they get, "something" or anything when it comes to any desire to engage in sexual abuse of a child.
> directing many toward the harmful route, since both are equally illegal
No, that is not the cause nor does it provide any justification.
> while also prosecuting those who did no harm.
Consumers of CSAM in ANY form are doing the worst harm. There is no excuse that can be provided to justify this. Take a long look at your life. Children are not sexual objects, AI generated or not.
Try to take the emotion out of the discussion. There is finally a way for people with an illness (in this case pedophilia) to "satisfy" urges without causing harm to children. They need professional help, which cannot be obtained easily in the UK due to a certain government cutting funding.
This isn't a "give pedos stuff" celebration; it's a discussion that needs to happen, and if you're not mature enough to keep emotion out of it, don't partake in the conversation.
Here's the problem: it doesn't matter whether it was trained on CSAM or not. Well, it does, but that's a different issue.
My point is, how do you know it wasn't trained on csam?
You can't possibly. You can point to all the places where csam isn't and say "we haven't found any illegal images yet." But you can't say with 100% certainty that there are none.
And since you can't prove that no csam is used to train the model, any argument beyond that point is moot. If this were almost any other issue I'd say eliminating 99.99% of the risk is completely valid and safe. But we're not talking about a celebrity or a porn star. We're talking about child victims of sexual assault, and to that end we should not accept anything other than absolute certainty. And because absolute certainty cannot exist, we should not simply accept it as a society.
I don't disagree that they require professional medical help, I disagree that they require masturbatory aides in the form of graphic text, images, or video of children in sexual poses/positions, of any sexual nature, or being abused. Computer generated or not.
So it's a discussion, and I'm mature if I agree with you but not if I don't, and therefore I can't join the discussion?
My understanding is that CSAM doesn't satisfy anything. Iirc research on the subject suggests that it causes most pedophiles to go out and look for the real thing.
Which scans. How many people watch normal porn and think, "well, that's good enough", and just stop pursuing a real partner?
Could you please provide such a paper? I couldn't find the same findings.
The difference between pedophiles and non-pedophiles is that the latter don't have to satisfy themselves with less; it's not morally wrong nor illegal to pursue relationships with an adult partner. It is, however, with children.
No one says pedophiles stop wanting to have relationships/sex with children after being exposed to either CSAM or AI imagery; but there is a difference between a wish and an intention, and if we can help them keep their wishes at bay, we should.
If dating adults deeply traumatized them and were illegal, many people would probably find relief in porn without taking real action. We just don't normally consider this perspective, because in reality it's totally okay and we don't have to limit ourselves.
That's why "satisfy" was in quotation marks; it's not a black-and-white matter, and for a lot of people this does nothing. But for a lot of people this is something that is potentially life-altering.
And I agree with what you're saying to an extent. But you watch porn to satisfy an urge; if I watch a certain category of porn, it doesn't mean I want to go out and experience that category.
This is a complicated matter, and something a magistrate is not equipped to deal with.
It's not a matter of entitlement but of real-world harm. And generated imagery involving imaginary children does not constitute child sexual abuse.
I'd gladly give pedophiles generated imagery if that were to stop them from lurking in search of real CSAM, supporting the industry that creates a very tangible harm - actual child abuse.
And my life has nothing to do with either, so don't make it personal. I only share my opinion on what we should really do to protect children, not to protect our deeply rooted views.
Now that's 100% reprehensible. I didn't read the link, but the only excuse I can think of is if it's used to automatically recognise csam, so a human doesn't have to look at it.
The link explains that they are in a dataset used to train a text-to-image model: images with hashes matching known CSAM. There are tools that could have caught this, which this dataset's creators failed to use. Gigantic and repugnant failure. Makes me want to never download a dataset.
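For context, the kind of tooling being referred to is roughly this: compare every image in the dataset against hash lists of known CSAM maintained by organisations like NCMEC or the IWF. A toy sketch (paths are made up; real pipelines use perceptual hashes such as PhotoDNA or PDQ rather than plain SHA-256, which only catches byte-identical copies):

```python
import hashlib
from pathlib import Path

# One hex digest per line of known-bad material, supplied by a clearinghouse.
known_bad = set(Path("blocklist_sha256.txt").read_text().split())

for image_path in Path("dataset/").rglob("*.jpg"):
    digest = hashlib.sha256(image_path.read_bytes()).hexdigest()
    if digest in known_bad:
        print(f"flag for removal and reporting: {image_path}")
```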
Now think of the photos that don't have any matching hashes. Social media has a ton of CSAM, and as long as they scrape from Facebook/Insta/Twitter or from porn sites with no verification system, they will continue to have CSAM in their training data.