Although I'm used to writing long, detailed replies and comments, what possibly saves me from being accused of being some LLM is the fact that English is not my main language, so I often make agreement and typo mistakes. LLMs are too "linguistically perfect". In a world where the Dead Internet Theory (i.e. there are few humans left in a bot-filled internet) is more and more real, human mistakes seem to be the only way to really distinguish between a bot and a human.
We're currently on GPT-4... You should look up what happened to GPT-1 through 3. If I remember right, GPT-2 in particular was ruined by a bad feedback loop that ended up locking it into only writing the most disgusting, vile smut you could think of in response to any query. Not sure what happened to the others, but frankly speaking, I don't think "feral rampancy" is off the table here.
For now. When robots get to feel, or merely exhibit, wrath, even occasionally, there will be no reason for us humans to prove our humanity anymore (because we'd be screwed at the first robotic display of wrath anyway 🤷‍♂️).
Yes, I heard a quote: "LLMs could pass the Turing test, but eventually they won't" — they'll start being too good at replies, and humans will know it's AI.