A Telegram user who advertises their services on Twitter will create AI-generated porn of anyone for a price, and has also targeted minors.
A Telegram user who advertises their services on Twitter will create an AI-generated pornographic image of anyone in the world for as little as $10 if users send them pictures of that person. Like many other Telegram communities and users producing nonconsensual AI-generated sexual images, this user creates fake nude images of celebrities, including images of minors in swimsuits, but is particularly notable because they plainly and openly demonstrate one of the most severe harms of generative AI tools: easily creating nonconsensual pornography of ordinary people.
That's a ripoff. It costs them at most $0.10 to do a simple Stable Diffusion img2img run, and most people could do it themselves. They're purposefully exploiting people who aren't tech-savvy.
I have no sympathy for the people being scammed here; I hope they lose hundreds to it. Making fake porn of somebody else without their consent, especially porn that others could mistake for real, is awful.
I wish everyone involved in this use of AI a very awful day.
It seems like there's a news story every month or two about a kid who kills themselves because videos of them are circulating. Or they're being blackmailed.
I have a really hard time thinking of the people who spend ten bucks making deep fakes of other people as victims.
But fuck, dude, they aren't taking advantage of anyone buying the service. That's not how the fucking world works. It turns out that if you have money, you can pay people to do shit like clean your house or change your oil.
NOBODY on that side of the equation is being exploited 🤣
In my experience with SD, getting images that aren't obviously "wrong" in some way takes multiple iterations and a fair amount of time tuning prompts and parameters.
It's not like deepfake pornography is "built in", but Stable Diffusion can take existing images and generate new ones based on them. That's kinda how it works, really. The de facto standard UI makes it pretty simple, even for someone who's not too tech-savvy: https://github.com/AUTOMATIC1111/stable-diffusion-webui
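For the curious, the same img2img operation the webui exposes is only a few lines with the Hugging Face diffusers library. This is just a minimal sketch with a benign prompt; the checkpoint name and parameter values are common illustrative defaults, not anything the webui specifically ships with:

    # Minimal img2img sketch using Hugging Face diffusers.
    # Checkpoint and parameter values are illustrative defaults.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # img2img starts from an existing picture and perturbs it toward the prompt.
    init_image = Image.open("photo.png").convert("RGB").resize((512, 512))

    result = pipe(
        prompt="a watercolor illustration of a person in a garden",
        image=init_image,
        strength=0.6,        # how far the output may drift from the input (0 to 1)
        guidance_scale=7.5,  # how strongly to follow the prompt
    ).images[0]
    result.save("out.png")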
Img2img isn't always spot-on with what you want it to do, though. I was making extra pictures for my kid's bedtime books that we made together, and it was really hit or miss. I've even goofed around with my own pictures to turn myself into various characters, and it doesn't work out like you want it to much of the time. I can imagine it's the same when going for porn: you'd need numerous iterations and tweaking over and over to get the right look/facsimile. There are tools/SD plugins like Roop that make transferring faces with img2img easier and more reliable, but even then it's still not perfect. I haven't messed around with it in several months, so maybe it's better and easier now.
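Most of that iteration boils down to re-running with different denoising strengths and eyeballing the results. Roughly like this, continuing from the diffusers sketch above (the values are arbitrary):

    # Sweep denoising strength to see how far each run drifts from the input.
    for strength in (0.3, 0.5, 0.7):
        img = pipe(
            prompt="a watercolor illustration of a person in a garden",
            image=init_image,
            strength=strength,
        ).images[0]
        img.save(f"out_strength_{strength}.png")

Low strength keeps the original's composition but barely changes it; high strength follows the prompt more but loses the likeness, which is exactly the hit-or-miss tradeoff.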
Thanks for the link. I've been running some LLMs locally, and I've been interested in Stable Diffusion, but I'm not sure I have the specs for it at the moment.
By the way, if you're interested in Stable Diffusion and it turns out your computer CAN'T handle it, there are sites that will let you toy around with it for free, like Civitai. They host an enormous number of models, and many of them work with the site's built-in generation.
Not quite as robust as running it locally, but worth trying out. And much faster than any of the ancient computers I own.
It depends on the models you use, too. There are specially trained models out there where all you need to do is give a prompt like "naked" or something, and it's scary good at making something realistic in two minutes. But yeah, there is a learning curve to setting everything up.