There is a thread in another community regarding some controversies happening in women's chess. I posted to that thread, recommending a book written by WGM Jennifer Shahade who is a multi-time US women's chess champion. I also linked to a review of the book, the url of which contained the book title.
It seems that the title, as chosen by the female author with considerable self-awareness, contains a word that is sometimes used as a sexist slur. You can see the title by clicking the link above. Unfortunately some kind of bot censored the title from both the post and the review link (to chessbase.com) that I had posted. I was able to fool the bot by changing a few characters, but the bot's very existence is, imho, in poor taste.
We are adults here, we shouldn't have robots filtering our language. If we act sexist or abusive then humans should intervene, but not bots. Otherwise we are in an annoying semi-dystopia. The particular post I made, as far as I can tell, is completely legitimate.
Frankly, I also hate this sort of word filter. I fully agree with the OP here because the issue is not the specific words that you use, but what you convey through those words within a certain context. The book title is a great example of that, as "bitсh" is partially reclaimed and the author is using it to label a group that she is part of.
It's also damn easy to circumvent this sort of filter, as I've exemplified above, so it's often useless.
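To make the circumvention point concrete: a filter of this kind can be sketched as a simple blocklist regex, and the few-character trick mentioned above (e.g. swapping in a lookalike Cyrillic letter) is enough to defeat it. This is an illustrative assumption about how such filters typically work, not the actual implementation running on any instance:

```python
import re

# Naive blocklist filter: match the literal word, case-insensitively.
# (Hypothetical sketch; real server-side filters may differ.)
BLOCKLIST = re.compile(r"bitch", re.IGNORECASE)

def censor(text: str) -> str:
    """Replace any blocklisted word with asterisks of the same length."""
    return BLOCKLIST.sub(lambda m: "*" * len(m.group()), text)

print(censor("Chess Bitch"))      # the plain ASCII word gets masked
print(censor("Chess Bit\u0441h"))  # Cyrillic 'с' (U+0441) slips straight through
print(censor("Chess B.i.t.c.h"))  # inserted punctuation also defeats the regex
```

The filter only ever sees exact byte sequences, so any visually identical substitute character or inserted separator passes untouched, which is exactly why determined posters get around it while ordinary posts get caught.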
It is not a bot that censored your post but the lemmy.ml server itself. For some reason lemmy.ml (and, I think, lemmy.world) filters some words from posts. If you don't like that, you'll have to switch to another instance.
I am using feddit.de and I can write "chess bitch" if I want.
Automated censoring is stupid anyway. The people interested in evading the filter will find it trivial, the people who are really so fragile they can't handle seeing mild profanity on the Internet should probably invest in some client-side censoring solutions, or go reinvent Club Penguin or something.
I have to partly disagree. There are certain words, such as the n-word, that must be censored to prevent right-wingers from spamming while mods are offline. But yeah, I think "bi*ch" is too ambiguous to be filtered.
Is there any evidence that this actually happens, or would happen?
All I ever see is humans being blocked or frustrated by the bot. I have never seen any kind of malicious spamming that could have been prevented by such a bot; spammers are normally thwarted by human mods.
Check the hexbear modlogs and you will see it: people trying to find ways around the blocking. And even such a simple deterrent would stop at least some spammers.
I think a good compromise would be to place them on a list to be manually reviewed by humans. I've seen those brigades so I think you make a great point.
The mentality of "I haven't seen it so it must be that it doesn't exist" on some of the replies here is weird.
Not even slurs are such a clear-cut case, for two reasons:
1. When right-wingers want to vomit their hate discourses, they're damn quick to circumvent this sort of filter.
2. In some cases, even the use of words often considered slurs can be legitimate. It depends on what the word conveys within a certain context; the OP provides an example, but I don't mind crafting another if anyone wants. (Or explaining the underlying mechanics.)
A third, albeit weaker, reason (it's a bit of a slippery-slope argument) is that this sets a precedent for instance admins and comm mods to say "it's fine to filter specific words, regardless of what they're used to say" once something similar to AutoMod appears. If that happens, they won't stop at slurs, as Reddit has shown.