GitHub - db0/lemmy-safety: A script that goes through a lemmy pict-rs object storage and tries to prevent illegal or unethical content.
I posted the other day that you can clean up your object storage from CSAM using my AI-based tool. Many people expressed the wish to use it on their local file storage-based pict-rs. So I've just extended its functionality to allow exactly that.
The new lemmy_safety_local_storage.py will go through your pict-rs volume on the filesystem, scan each image for CSAM, and delete any matches. The requirements are:
A Linux account with read-write access to the volume files
Private key authentication for that account
As my main instance uses object storage, my testing is limited to my dev instance, where everything looks OK to me. But do run it with --dry_run if you're worried. You can delete lemmy_safety.db and rerun to enforce the deletions afterwards (a method to act on the --dry_run results directly is coming soon).
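The scan-then-delete flow described above can be sketched roughly like this. Everything here is hypothetical: the function names, the database schema, and especially `check_image`, which stands in for the real AI classifier and flags nothing, so the sketch is safe to run.

```python
import os
import sqlite3

def check_image(path):
    # Placeholder for the AI classifier used by the real tool.
    # Always returns False here so this sketch never deletes anything.
    return False

def scan_volume(root, db_path="lemmy_safety.db", dry_run=True):
    """Walk a pict-rs volume, record scan results, and delete flagged files.

    Deleting db_path and rerunning repeats the scan from scratch,
    mirroring the lemmy_safety.db reset described in the post.
    """
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS checked (path TEXT PRIMARY KEY, flagged INTEGER)"
    )
    flagged = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            is_match = check_image(path)
            conn.execute(
                "INSERT OR REPLACE INTO checked VALUES (?, ?)",
                (path, int(is_match)),
            )
            if is_match:
                flagged.append(path)
                if not dry_run:  # --dry_run: report only, never remove
                    os.remove(path)
    conn.commit()
    conn.close()
    return flagged
```

With `dry_run=True` the walk only records results in the database, which is why rerunning after deleting the database file is what actually enforces removals.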
PS: if you were using the object storage cleanup, that script has been renamed to lemmy_safety_object_storage.py
As the dev said, it flags a lot of false positives, so a human should review them anyway.
Maybe when this is a bit more evolved, we can use it to preprocess posts: if a post gets flagged for something, a mod/admin would need to approve it manually.
Maybe for CSAM, flagged images could be sent to an external service specialized in that material, so the mod/admin doesn't have to look at them.
While it's not the case for this project, I'm sure there's some poor researcher out there who trained a model on actual confiscated CSAM. Or, more likely, overworked, traumatized third-world content moderators employed by the likes of Meta.