In an unsurprising story out of Denver, CO, 29-year-old Ethan Gaines says he’s “firmly against” the regulation of deepfakes, while simultaneously being the main reason why they should be regulated.
“Deepfakes being regulated is an infringement upon my rights!” Ethan told reporters. “I should
The only people who'd actually benefit from regulating deepfakes are some high-profile figures and/or internet narcissists.
"Boohoo someone made a video of Trump's hemorrhoids and Biden licking them"
Everyone already knows you can easily fake video without using "AI" for it, we have a whole fucking industry for it pumping out hundreds of movies a year. We already know you shouldn't believe everything you see.
True. Freedom of speech and of the press is a peculiarly American thing. In virtually all other countries... No, wait. That's the 2nd amendment. What were we talking about?
The past month or so I've started encountering quite a few deepfakes on dating sites. I honestly can't tell they're deepfakes just by looking; the only reason I realised is because they were very obviously Instagram model photos. I reverse image searched them to find where they were taken from and confirm my suspicion that the profile was using stolen photos, only to find that the original photos aren't quite the same. It'll be the exact same shot with the same body but a different face, and with identifying tattoos removed, moles added, etc.
If they weren't obvious modelling shots that made me want to reverse image search them, I wouldn't have known at all. It makes me wonder how many deepfaked images I've encountered on dating sites already and just not known about because they've been fairly innocuous-looking photos...
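The detection workflow described above (reverse image search surfacing a near-identical photo with a small edited region) can be sketched with a toy perceptual "average hash", which is roughly the idea behind how image search engines match near-duplicates. This is a minimal sketch, not a real search engine: real systems downscale and hash actual photos, while the 8x8 pixel grids here are hypothetical stand-ins.

```python
# Toy perceptual hashing: near-duplicate images produce hashes that differ
# in only a few bits, unlike cryptographic hashes where any change scrambles
# everything. Images are modeled as plain grayscale pixel grids here, so no
# image library is needed.

def average_hash(pixels):
    """Return a 64-bit hash: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance suggests a near-duplicate."""
    return bin(a ^ b).count("1")

# Hypothetical "original shot" vs. the same shot with one edited region
# (standing in for a swapped face or a removed tattoo):
original = [[(r * 8 + c) % 256 for c in range(8)] for r in range(8)]
edited = [row[:] for row in original]
edited[0][0] = 255  # tamper with a single pixel block

print(hamming(average_hash(original), average_hash(edited)))  # → 5 of 64 bits
```

A distance of a few bits out of 64 flags the pair as the same underlying photo with a local edit, which is exactly the pattern of a deepfaked profile picture built from a stolen original.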
That's just disinformation, which again, already exists. 80% of political YT content is someone lying to the viewers to push their agenda. That has nothing to do with deepfakes.
Not exactly. Arguments like "they should be regulated because they can be used for illegal stuff" are moot, since those uses are already illegal. I'm on the fence on the whole regulation thing and I've yet to see any realistic example of what regulation would actually look like.
Is it even logical to regulate ai images specifically, or should we lump it in together with any form of image manipulation?
Okay but can you tell the difference between legal real evidence and illegal false evidence?
The technology to create this type of false evidence is already here; it's not going back into Pandora's box. The truth is that you can't trust a single video as 100% conclusive evidence on its own anymore.