Bots are running rampant. How do we stop them from ruining Lemmy?
Social media platforms like Twitter and Reddit are increasingly infested with bots and fake accounts, leading to significant manipulation of public discourse. These bots don't just annoy users—they skew visibility through vote manipulation. Fake accounts and automated scripts systematically downvote posts opposing certain viewpoints, distorting the content that surfaces and amplifying specific agendas.
Before coming to Lemmy, I was systematically downvoted by bots on Reddit for completely normal comments that were relatively neutral and not controversial at all. Seemed to be no pattern in it... One time I commented that my favorite game was WoW, down voted -15 for no apparent reason.
For example, a bot on Twitter using an API call to GPT-4o ran out of funding and started posting their prompts and system information publicly.
Bots like these are probably in the tens or hundreds of thousands. They did a huge ban wave of bots on Reddit, and some major top level subreddits were quiet for days because of it. Unbelievable...
How do we even fix this issue or prevent it from affecting Lemmy??
To help fight bot disinformation, I think there needs to be an international treaty that requires all AI models/bots to disclose themselves as AI when prompted using a set keyphrase in every language, and that API access to the model be contingent on paying regain tests of the phrase (to keep bad actors from simply filtering out that phrase in their requests to the API).
It wouldn't stop the nation-state level bad actors, but it would help prevent people without access to their own private LLMs from being able to use them as effectively for disinformation.
I can download a decent size LLM such as Llama 3.1 in under 20 seconds then immediately start using it. No terminal, no complicated git commands, just pressing download in a slick-looking, user-friendly GUI.
They're trivial to run yourself. And most are open source.